Mar 17 17:41:19.866446 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:41:19.866467 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:41:19.866476 kernel: KASLR enabled
Mar 17 17:41:19.866482 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 17 17:41:19.866488 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Mar 17 17:41:19.866493 kernel: random: crng init done
Mar 17 17:41:19.866500 kernel: secureboot: Secure boot disabled
Mar 17 17:41:19.866525 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:41:19.866532 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 17 17:41:19.866541 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:41:19.866547 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866552 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866558 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866564 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866571 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866579 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866585 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866592 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866598 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:19.866604 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:41:19.866610 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 17 17:41:19.866616 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:41:19.866622 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:41:19.866628 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 17 17:41:19.866634 kernel: Zone ranges:
Mar 17 17:41:19.866642 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:41:19.866648 kernel: DMA32 empty
Mar 17 17:41:19.866654 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 17 17:41:19.866660 kernel: Movable zone start for each node
Mar 17 17:41:19.866666 kernel: Early memory node ranges
Mar 17 17:41:19.866672 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 17 17:41:19.866678 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 17 17:41:19.866684 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 17 17:41:19.866691 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 17 17:41:19.866697 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 17 17:41:19.866703 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 17 17:41:19.866709 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 17 17:41:19.866717 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:41:19.866723 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 17 17:41:19.866729 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:41:19.866738 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:41:19.866744 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:41:19.866751 kernel: psci: Trusted OS migration not required
Mar 17 17:41:19.866759 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:41:19.866766 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:41:19.866772 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:41:19.866779 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:41:19.866786 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:41:19.866792 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:41:19.866799 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:41:19.866805 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:41:19.866812 kernel: CPU features: detected: Spectre-v4
Mar 17 17:41:19.866818 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:41:19.866826 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:41:19.866833 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:41:19.866839 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:41:19.866849 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:41:19.866857 kernel: alternatives: applying boot alternatives
Mar 17 17:41:19.866865 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:41:19.866872 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:41:19.866878 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:41:19.866885 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:41:19.866892 kernel: Fallback order for Node 0: 0
Mar 17 17:41:19.866898 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 17 17:41:19.866906 kernel: Policy zone: Normal
Mar 17 17:41:19.866945 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:41:19.866952 kernel: software IO TLB: area num 2.
Mar 17 17:41:19.866960 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 17 17:41:19.866967 kernel: Memory: 3882616K/4096000K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 213384K reserved, 0K cma-reserved)
Mar 17 17:41:19.866974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:41:19.866980 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:41:19.866988 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:41:19.866994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:41:19.867002 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:41:19.867008 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:41:19.867015 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:41:19.867025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:41:19.867031 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:41:19.867038 kernel: GICv3: 256 SPIs implemented
Mar 17 17:41:19.867045 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:41:19.867051 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:41:19.867058 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:41:19.867065 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:41:19.867071 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:41:19.867078 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:41:19.867085 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:41:19.867091 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 17 17:41:19.867100 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 17 17:41:19.867106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:41:19.867113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:41:19.867120 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:41:19.867127 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:41:19.867133 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:41:19.867140 kernel: Console: colour dummy device 80x25
Mar 17 17:41:19.867147 kernel: ACPI: Core revision 20230628
Mar 17 17:41:19.867154 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:41:19.867161 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:41:19.867169 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:41:19.867176 kernel: landlock: Up and running.
Mar 17 17:41:19.867183 kernel: SELinux: Initializing.
Mar 17 17:41:19.867189 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:41:19.867196 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:41:19.867203 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:41:19.867210 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:41:19.867217 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:41:19.867224 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:41:19.867230 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:41:19.867238 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:41:19.867245 kernel: Remapping and enabling EFI services.
Mar 17 17:41:19.867252 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:41:19.867258 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:41:19.867265 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:41:19.867272 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 17 17:41:19.867279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:41:19.867286 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:41:19.867292 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:41:19.867300 kernel: SMP: Total of 2 processors activated.
Mar 17 17:41:19.867307 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:41:19.867319 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:41:19.867327 kernel: CPU features: detected: Common not Private translations
Mar 17 17:41:19.867334 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:41:19.867341 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:41:19.867349 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:41:19.867356 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:41:19.867363 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:41:19.867371 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:41:19.867378 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:41:19.867385 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:41:19.867393 kernel: alternatives: applying system-wide alternatives
Mar 17 17:41:19.867400 kernel: devtmpfs: initialized
Mar 17 17:41:19.867407 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:41:19.867414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:41:19.867421 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:41:19.867430 kernel: SMBIOS 3.0.0 present.
Mar 17 17:41:19.867437 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 17 17:41:19.867444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:41:19.867452 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:41:19.867459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:41:19.867469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:41:19.867477 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:41:19.867485 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Mar 17 17:41:19.867494 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:41:19.869544 kernel: cpuidle: using governor menu
Mar 17 17:41:19.869565 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:41:19.869573 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:41:19.869581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:41:19.869588 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:41:19.869596 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:41:19.869603 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:41:19.869611 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:41:19.869618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:41:19.869632 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:41:19.869639 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:41:19.869646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:41:19.869653 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:41:19.869661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:41:19.869668 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:41:19.869675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:41:19.869682 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:41:19.869690 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:41:19.869698 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:41:19.869706 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:41:19.869713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:41:19.869720 kernel: ACPI: Interpreter enabled
Mar 17 17:41:19.869727 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:41:19.869734 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:41:19.869742 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:41:19.869749 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:41:19.869756 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:41:19.869925 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:41:19.870008 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:41:19.870076 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:41:19.870139 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:41:19.870201 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:41:19.870210 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:41:19.870217 kernel: PCI host bridge to bus 0000:00
Mar 17 17:41:19.870291 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:41:19.870349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:41:19.870406 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:41:19.870462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:41:19.872630 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:41:19.872729 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 17 17:41:19.872801 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 17 17:41:19.872865 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:41:19.872982 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873053 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 17 17:41:19.873127 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873191 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 17 17:41:19.873262 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873332 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 17 17:41:19.873402 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873465 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 17 17:41:19.873567 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873632 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 17 17:41:19.873705 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873768 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 17 17:41:19.873837 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.873905 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 17 17:41:19.873998 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.874064 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 17 17:41:19.874135 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:41:19.874203 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 17 17:41:19.874275 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 17 17:41:19.874339 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 17 17:41:19.874413 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:41:19.874481 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 17 17:41:19.874577 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:41:19.874649 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:41:19.874720 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:41:19.874789 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 17 17:41:19.874873 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:41:19.874955 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 17 17:41:19.875024 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 17 17:41:19.875098 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:41:19.875168 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 17 17:41:19.875242 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:41:19.875308 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 17 17:41:19.875373 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 17 17:41:19.875446 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:41:19.877611 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 17 17:41:19.877721 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:41:19.877807 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:41:19.877873 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 17 17:41:19.877986 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 17 17:41:19.878058 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:41:19.878126 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 17 17:41:19.878195 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:41:19.878257 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:41:19.878325 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 17 17:41:19.878388 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 17 17:41:19.878452 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 17 17:41:19.878532 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 17:41:19.878599 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:41:19.878662 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:41:19.878734 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 17:41:19.878798 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 17 17:41:19.878861 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 17 17:41:19.878941 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 17:41:19.879008 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:41:19.879073 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:41:19.879140 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 17:41:19.879207 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:41:19.879270 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:41:19.879336 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:41:19.879410 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:41:19.879473 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:41:19.882683 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:41:19.882766 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:41:19.882831 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:41:19.882905 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:41:19.883034 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:41:19.883101 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:41:19.883168 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 17 17:41:19.883232 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:41:19.883298 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 17 17:41:19.883362 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:41:19.883431 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 17 17:41:19.883495 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:41:19.883582 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 17 17:41:19.883647 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:41:19.883796 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 17 17:41:19.883874 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:41:19.883968 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 17 17:41:19.884037 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:41:19.884142 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 17 17:41:19.884211 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:41:19.884276 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 17 17:41:19.884339 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:41:19.884402 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 17 17:41:19.884470 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:41:19.885971 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 17 17:41:19.886113 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 17 17:41:19.886188 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 17 17:41:19.886253 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:41:19.886318 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 17 17:41:19.886381 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:41:19.886446 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 17 17:41:19.886539 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:41:19.886658 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 17 17:41:19.886728 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 17:41:19.886792 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 17 17:41:19.886854 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 17:41:19.886937 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 17 17:41:19.887006 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 17:41:19.887072 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 17 17:41:19.887140 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 17:41:19.887205 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 17 17:41:19.887268 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 17:41:19.887332 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 17 17:41:19.887396 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 17 17:41:19.887469 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 17 17:41:19.888988 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 17 17:41:19.889074 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:41:19.889150 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 17 17:41:19.889215 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:41:19.889278 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 17:41:19.889341 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 17 17:41:19.889404 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:41:19.889476 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 17 17:41:19.889563 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:41:19.889629 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 17:41:19.889691 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 17 17:41:19.889753 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:41:19.889823 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:41:19.889888 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 17 17:41:19.889976 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:41:19.890045 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 17:41:19.890110 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 17 17:41:19.890172 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:41:19.890243 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:41:19.890307 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:41:19.890372 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 17:41:19.890437 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 17 17:41:19.892159 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:41:19.892284 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 17 17:41:19.892353 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 17 17:41:19.892419 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:41:19.892484 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 17:41:19.892569 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 17 17:41:19.892636 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:41:19.892707 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 17 17:41:19.892782 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 17 17:41:19.892849 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:41:19.892927 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 17:41:19.892997 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 17 17:41:19.893061 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:41:19.893135 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 17 17:41:19.893202 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 17 17:41:19.893275 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 17 17:41:19.893345 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:41:19.893410 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 17:41:19.893474 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 17 17:41:19.893564 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:41:19.893636 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:41:19.893702 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 17:41:19.893766 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 17 17:41:19.893829 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:41:19.893901 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:41:19.894012 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 17 17:41:19.894080 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 17 17:41:19.894145 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:41:19.894212 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:41:19.894269 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:41:19.894327 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:41:19.894400 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 17:41:19.894460 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 17 17:41:19.898586 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:41:19.898755 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 17 17:41:19.898818 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 17 17:41:19.898877 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:41:19.898988 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 17 17:41:19.899064 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 17 17:41:19.899134 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:41:19.899202 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 17 17:41:19.899264 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 17 17:41:19.899327 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:41:19.899395 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 17 17:41:19.899456 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 17 17:41:19.899541 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:41:19.899612 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 17 17:41:19.899677 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 17 17:41:19.899738 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:41:19.899872 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 17 17:41:19.899983 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 17 17:41:19.900051 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:41:19.900124 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 17 17:41:19.900184 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 17 17:41:19.900243 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:41:19.900314 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 17 17:41:19.900374 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 17 17:41:19.900432 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:41:19.900442 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:41:19.900450 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:41:19.900458 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:41:19.900466 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:41:19.900473 kernel: iommu: Default domain type: Translated
Mar 17 17:41:19.900483 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:41:19.900491 kernel: efivars: Registered efivars operations
Mar 17 17:41:19.900499 kernel: vgaarb: loaded
Mar 17 17:41:19.902564 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:41:19.902578 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:41:19.902586 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:41:19.902594 kernel: pnp: PnP ACPI init
Mar 17 17:41:19.902708 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:41:19.902726 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:41:19.902734 kernel: NET: Registered PF_INET protocol family
Mar 17 17:41:19.902741 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:41:19.902750 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:41:19.902758 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:41:19.902766 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:41:19.902773 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:41:19.902781 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:41:19.902789 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:41:19.902800 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:41:19.902808 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:41:19.902884 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 17 17:41:19.902896 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:41:19.902903 kernel: kvm [1]: HYP mode not available
Mar 17 17:41:19.902925 kernel: Initialise system trusted keyrings
Mar 17 17:41:19.902934 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:41:19.902942 kernel: Key type asymmetric registered
Mar 17 17:41:19.902950 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:41:19.902961 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:41:19.902969 kernel: io scheduler mq-deadline registered
Mar 17 17:41:19.902976 kernel: io scheduler kyber registered
Mar 17 17:41:19.902984 kernel: io scheduler bfq registered
Mar 17 17:41:19.902992 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:41:19.903068 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 17 17:41:19.903134 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 17 17:41:19.903198 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 17 17:41:19.903268 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 17 17:41:19.903333 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 17 17:41:19.903398 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Mar 17 17:41:19.903465 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:41:19.903585 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:41:19.903654 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.903725 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:41:19.903789 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:41:19.903853 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.903931 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:41:19.904001 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:41:19.904066 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.904136 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:41:19.904200 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:41:19.904263 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.904330 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:41:19.904393 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:41:19.904456 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.906652 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:41:19.906742 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:41:19.906806 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 
17:41:19.906817 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:41:19.906882 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:41:19.907006 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:41:19.907084 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:41:19.907094 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:41:19.907102 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:41:19.907110 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:41:19.907181 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:41:19.907253 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:41:19.907264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:41:19.907272 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:41:19.907335 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:41:19.907348 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:41:19.907355 kernel: thunder_xcv, ver 1.0 Mar 17 17:41:19.907363 kernel: thunder_bgx, ver 1.0 Mar 17 17:41:19.907370 kernel: nicpf, ver 1.0 Mar 17 17:41:19.907378 kernel: nicvf, ver 1.0 Mar 17 17:41:19.907453 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:41:19.907537 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:41:19 UTC (1742233279) Mar 17 17:41:19.907548 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:41:19.907559 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:41:19.907567 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:41:19.907574 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:41:19.907582 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:41:19.907590 kernel: Segment 
Routing with IPv6 Mar 17 17:41:19.907598 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:41:19.907605 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:41:19.907613 kernel: Key type dns_resolver registered Mar 17 17:41:19.907621 kernel: registered taskstats version 1 Mar 17 17:41:19.907633 kernel: Loading compiled-in X.509 certificates Mar 17 17:41:19.907640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c' Mar 17 17:41:19.907648 kernel: Key type .fscrypt registered Mar 17 17:41:19.907655 kernel: Key type fscrypt-provisioning registered Mar 17 17:41:19.907663 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:41:19.907671 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:41:19.907679 kernel: ima: No architecture policies found Mar 17 17:41:19.907686 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:41:19.907696 kernel: clk: Disabling unused clocks Mar 17 17:41:19.907703 kernel: Freeing unused kernel memory: 39744K Mar 17 17:41:19.907711 kernel: Run /init as init process Mar 17 17:41:19.907718 kernel: with arguments: Mar 17 17:41:19.907726 kernel: /init Mar 17 17:41:19.907734 kernel: with environment: Mar 17 17:41:19.907741 kernel: HOME=/ Mar 17 17:41:19.907748 kernel: TERM=linux Mar 17 17:41:19.907755 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:41:19.907765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:41:19.907776 systemd[1]: Detected virtualization kvm. Mar 17 17:41:19.907784 systemd[1]: Detected architecture arm64. Mar 17 17:41:19.907792 systemd[1]: Running in initrd. 
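The systemd banner above lists its compile-time features as a `+FOO -BAR` string (`+PAM +AUDIT +SELINUX -APPARMOR …`). A minimal sketch of how such a feature string can be split into enabled/disabled sets (the `parse_features` helper is illustrative, not part of systemd):

```python
def parse_features(flags: str):
    """Split a systemd-style '+FOO -BAR' feature string into
    (enabled, disabled) sets. Tokens without a +/- prefix, such as
    'default-hierarchy=unified', are ignored in this sketch."""
    enabled, disabled = set(), set()
    for tok in flags.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# A fragment of the feature string from the log above:
enabled, disabled = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK")
print(sorted(disabled))  # ['APPARMOR']
```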
Mar 17 17:41:19.907799 systemd[1]: No hostname configured, using default hostname. Mar 17 17:41:19.907807 systemd[1]: Hostname set to . Mar 17 17:41:19.907815 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:41:19.907823 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:41:19.907833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:41:19.907841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:41:19.907849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:41:19.907858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:41:19.907866 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:41:19.907874 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:41:19.907884 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:41:19.907894 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:41:19.907902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:41:19.907918 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:41:19.907928 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:41:19.907936 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:41:19.907944 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:41:19.907952 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:41:19.907960 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 17 17:41:19.907971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:41:19.907980 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:41:19.907988 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:41:19.907996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:41:19.908004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:41:19.908012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:41:19.908020 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:41:19.908028 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:41:19.908038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:41:19.908046 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:41:19.908054 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:41:19.908063 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:41:19.908070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:41:19.908079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:41:19.908106 systemd-journald[237]: Collecting audit messages is disabled. Mar 17 17:41:19.908128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:41:19.908137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:41:19.908145 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:41:19.908156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:41:19.908164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:41:19.908173 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:41:19.908181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:41:19.908189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:41:19.908199 systemd-journald[237]: Journal started Mar 17 17:41:19.908219 systemd-journald[237]: Runtime Journal (/run/log/journal/86e2017f0e1a465b883e2af2283b47c8) is 8.0M, max 76.6M, 68.6M free. Mar 17 17:41:19.887557 systemd-modules-load[238]: Inserted module 'overlay' Mar 17 17:41:19.910644 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:41:19.910663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:41:19.913187 systemd-modules-load[238]: Inserted module 'br_netfilter' Mar 17 17:41:19.913678 kernel: Bridge firewalling registered Mar 17 17:41:19.915063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:41:19.915813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:41:19.928849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:41:19.931538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:41:19.933258 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:41:19.943726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:41:19.944488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:41:19.948560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 17:41:19.955573 dracut-cmdline[269]: dracut-dracut-053 Mar 17 17:41:19.962941 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405 Mar 17 17:41:19.959143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:41:19.998270 systemd-resolved[277]: Positive Trust Anchors: Mar 17 17:41:19.998347 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:41:19.998378 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:41:20.003460 systemd-resolved[277]: Defaulting to hostname 'linux'. Mar 17 17:41:20.004613 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:41:20.005256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:41:20.049573 kernel: SCSI subsystem initialized Mar 17 17:41:20.054544 kernel: Loading iSCSI transport class v2.0-870. 
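The dracut-cmdline record above prints the full kernel command line as a mix of bare flags and `key=value` tokens. A minimal sketch of the usual parsing convention, assuming whitespace-separated tokens and no quoting (the `parse_params` helper is illustrative, not dracut's actual parser):

```python
def parse_params(cmdline: str) -> dict:
    """Split a kernel-style parameter string into a dict.

    Bare words map to True; 'key=value' tokens keep their value.
    Only the first '=' splits, so values like 'LABEL=ROOT' survive.
    Quoting is not handled in this sketch.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Tokens taken from the command line shown in the log above:
params = parse_params("rd.driver.pre=btrfs root=LABEL=ROOT consoleblank=0 "
                      "flatcar.first_boot=detected acpi=force")
print(params["root"])  # LABEL=ROOT
```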
Mar 17 17:41:20.061561 kernel: iscsi: registered transport (tcp) Mar 17 17:41:20.075560 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:41:20.075637 kernel: QLogic iSCSI HBA Driver Mar 17 17:41:20.121637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:41:20.127893 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:41:20.145770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:41:20.145860 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:41:20.145882 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:41:20.199585 kernel: raid6: neonx8 gen() 15708 MB/s Mar 17 17:41:20.213554 kernel: raid6: neonx4 gen() 15581 MB/s Mar 17 17:41:20.230593 kernel: raid6: neonx2 gen() 13170 MB/s Mar 17 17:41:20.247579 kernel: raid6: neonx1 gen() 10435 MB/s Mar 17 17:41:20.264575 kernel: raid6: int64x8 gen() 6933 MB/s Mar 17 17:41:20.281593 kernel: raid6: int64x4 gen() 7318 MB/s Mar 17 17:41:20.298558 kernel: raid6: int64x2 gen() 6104 MB/s Mar 17 17:41:20.315565 kernel: raid6: int64x1 gen() 5041 MB/s Mar 17 17:41:20.315641 kernel: raid6: using algorithm neonx8 gen() 15708 MB/s Mar 17 17:41:20.332566 kernel: raid6: .... xor() 11874 MB/s, rmw enabled Mar 17 17:41:20.332651 kernel: raid6: using neon recovery algorithm Mar 17 17:41:20.337547 kernel: xor: measuring software checksum speed Mar 17 17:41:20.337617 kernel: 8regs : 19754 MB/sec Mar 17 17:41:20.337634 kernel: 32regs : 19669 MB/sec Mar 17 17:41:20.337650 kernel: arm64_neon : 24520 MB/sec Mar 17 17:41:20.338544 kernel: xor: using function: arm64_neon (24520 MB/sec) Mar 17 17:41:20.387562 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:41:20.405093 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
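The raid6 lines above show the kernel benchmarking each generator implementation and then settling on the fastest one ("using algorithm neonx8 gen() 15708 MB/s"). The selection step amounts to a max-by-throughput pick, sketched here with the measured figures transcribed from the log (the `pick_algorithm` helper is illustrative, not the kernel's code):

```python
# Throughput figures (MB/s) transcribed from the raid6 benchmark above.
gen_results = {
    "neonx8": 15708, "neonx4": 15581, "neonx2": 13170, "neonx1": 10435,
    "int64x8": 6933, "int64x4": 7318, "int64x2": 6104, "int64x1": 5041,
}

def pick_algorithm(results: dict) -> str:
    """Mirror the kernel's choice: take the generator with the
    highest measured throughput."""
    return max(results, key=results.get)

print(pick_algorithm(gen_results))  # neonx8, matching the log
```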
Mar 17 17:41:20.411802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:41:20.425402 systemd-udevd[455]: Using default interface naming scheme 'v255'. Mar 17 17:41:20.429015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:41:20.439674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:41:20.455865 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Mar 17 17:41:20.491711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:41:20.496830 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:41:20.546992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:41:20.556250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:41:20.571875 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:41:20.575455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:41:20.576496 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:41:20.577110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:41:20.583707 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:41:20.603475 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 17 17:41:20.659545 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:41:20.665124 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:41:20.665213 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:41:20.672769 kernel: ACPI: bus type USB registered Mar 17 17:41:20.672821 kernel: usbcore: registered new interface driver usbfs Mar 17 17:41:20.672832 kernel: usbcore: registered new interface driver hub Mar 17 17:41:20.673212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:41:20.674341 kernel: usbcore: registered new device driver usb Mar 17 17:41:20.674561 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:41:20.676981 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:41:20.678046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:41:20.678227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:41:20.681656 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:41:20.694800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 17:41:20.702593 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:41:20.709777 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:41:20.709893 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:41:20.709999 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:41:20.710095 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:41:20.710183 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:41:20.710264 kernel: hub 1-0:1.0: USB hub found Mar 17 17:41:20.710362 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:41:20.710440 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 17:41:20.710556 kernel: hub 2-0:1.0: USB hub found Mar 17 17:41:20.710645 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:41:20.711023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:41:20.721123 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 17 17:41:20.727394 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 17 17:41:20.727582 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:41:20.727594 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:41:20.719832 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:41:20.740729 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 17 17:41:20.753424 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 17:41:20.753604 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 17 17:41:20.753694 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 17 17:41:20.753776 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:41:20.753857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Mar 17 17:41:20.753875 kernel: GPT:17805311 != 80003071 Mar 17 17:41:20.753885 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:41:20.753895 kernel: GPT:17805311 != 80003071 Mar 17 17:41:20.753944 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:41:20.753958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:41:20.753969 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 17 17:41:20.751045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:41:20.795563 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (510) Mar 17 17:41:20.798534 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (506) Mar 17 17:41:20.804425 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:41:20.809362 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 17:41:20.819529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:41:20.820194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 17:41:20.827762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:41:20.836825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:41:20.853642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:41:20.853722 disk-uuid[579]: Primary Header is updated. Mar 17 17:41:20.853722 disk-uuid[579]: Secondary Entries is updated. Mar 17 17:41:20.853722 disk-uuid[579]: Secondary Header is updated. 
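The GPT warnings above ("GPT:17805311 != 80003071") are the classic symptom of a disk image written to a larger device: the backup (alternate) GPT header must sit in the disk's last LBA, but the image still records the last LBA of the smaller original disk. The later `disk-uuid` records show the headers being rewritten to match. The arithmetic behind the warning can be sketched as follows (the helper name is illustrative):

```python
def expected_alt_header_lba(total_sectors: int) -> int:
    """The GPT backup (alternate) header lives in the disk's last
    LBA; for a disk of N sectors that is LBA N-1."""
    return total_sectors - 1

total_sectors = 80003072   # 512-byte logical blocks, from the sd log above
stored_alt_lba = 17805311  # location recorded by the smaller source image

# 80003071, hence the kernel's "GPT:17805311 != 80003071" complaint:
print(expected_alt_header_lba(total_sectors))
```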
Mar 17 17:41:20.951043 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:41:21.194580 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 17 17:41:21.328675 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 17 17:41:21.328731 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:41:21.329654 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 17 17:41:21.384596 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 17 17:41:21.385083 kernel: usbcore: registered new interface driver usbhid Mar 17 17:41:21.387361 kernel: usbhid: USB HID core driver Mar 17 17:41:21.871999 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:41:21.872059 disk-uuid[580]: The operation has completed successfully. Mar 17 17:41:21.921210 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:41:21.921326 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:41:21.952853 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:41:21.958291 sh[595]: Success Mar 17 17:41:21.969524 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:41:22.022991 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:41:22.034689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:41:22.037541 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:41:22.051963 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 Mar 17 17:41:22.052035 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:41:22.052059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:41:22.052081 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:41:22.052729 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:41:22.058551 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:41:22.061412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:41:22.062462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:41:22.072973 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:41:22.077763 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:41:22.089530 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:41:22.089604 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:41:22.089638 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:41:22.093629 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:41:22.093701 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:41:22.102120 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:41:22.102702 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:41:22.111342 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:41:22.116706 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 17 17:41:22.216033 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:41:22.216993 ignition[677]: Ignition 2.20.0 Mar 17 17:41:22.217000 ignition[677]: Stage: fetch-offline Mar 17 17:41:22.217037 ignition[677]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:41:22.217045 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:41:22.217196 ignition[677]: parsed url from cmdline: "" Mar 17 17:41:22.220576 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:41:22.217199 ignition[677]: no config URL provided Mar 17 17:41:22.217204 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:41:22.217211 ignition[677]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:41:22.217216 ignition[677]: failed to fetch config: resource requires networking Mar 17 17:41:22.217383 ignition[677]: Ignition finished successfully Mar 17 17:41:22.226976 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:41:22.250864 systemd-networkd[784]: lo: Link UP Mar 17 17:41:22.250873 systemd-networkd[784]: lo: Gained carrier Mar 17 17:41:22.252453 systemd-networkd[784]: Enumeration completed Mar 17 17:41:22.252573 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:41:22.253327 systemd[1]: Reached target network.target - Network. Mar 17 17:41:22.255209 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:41:22.255212 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:41:22.256231 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:41:22.256234 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:41:22.256725 systemd-networkd[784]: eth0: Link UP
Mar 17 17:41:22.256728 systemd-networkd[784]: eth0: Gained carrier
Mar 17 17:41:22.256735 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:22.262832 systemd-networkd[784]: eth1: Link UP
Mar 17 17:41:22.262836 systemd-networkd[784]: eth1: Gained carrier
Mar 17 17:41:22.262847 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:22.263972 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:41:22.278047 ignition[786]: Ignition 2.20.0
Mar 17 17:41:22.278877 ignition[786]: Stage: fetch
Mar 17 17:41:22.279159 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:22.279175 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:22.279302 ignition[786]: parsed url from cmdline: ""
Mar 17 17:41:22.279307 ignition[786]: no config URL provided
Mar 17 17:41:22.279314 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:41:22.279325 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:41:22.279417 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 17 17:41:22.280339 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 17 17:41:22.289606 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:41:22.317601 systemd-networkd[784]: eth0: DHCPv4 address 88.198.122.152/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:41:22.480476 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 17 17:41:22.489597 ignition[786]: GET result: OK
Mar 17 17:41:22.489713 ignition[786]: parsing config with SHA512: b2eaf66ccb6240ebcdc22af017112657defb05a4bc623c57211d66de973bf9a02de52bb7416089c3f23b19afeaffbff1cba0d7d7c93e98ddcb8233bbf0f10732
Mar 17 17:41:22.495401 unknown[786]: fetched base config from "system"
Mar 17 17:41:22.495415 unknown[786]: fetched base config from "system"
Mar 17 17:41:22.495911 ignition[786]: fetch: fetch complete
Mar 17 17:41:22.495429 unknown[786]: fetched user config from "hetzner"
Mar 17 17:41:22.496057 ignition[786]: fetch: fetch passed
Mar 17 17:41:22.498501 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:41:22.496138 ignition[786]: Ignition finished successfully
Mar 17 17:41:22.507797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:41:22.520585 ignition[794]: Ignition 2.20.0
Mar 17 17:41:22.520597 ignition[794]: Stage: kargs
Mar 17 17:41:22.520771 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:22.520784 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:22.521690 ignition[794]: kargs: kargs passed
Mar 17 17:41:22.523821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:41:22.521736 ignition[794]: Ignition finished successfully
Mar 17 17:41:22.528690 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:41:22.541456 ignition[800]: Ignition 2.20.0
Mar 17 17:41:22.541468 ignition[800]: Stage: disks
Mar 17 17:41:22.541661 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:22.541671 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:22.542625 ignition[800]: disks: disks passed
Mar 17 17:41:22.542672 ignition[800]: Ignition finished successfully
Mar 17 17:41:22.544454 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:41:22.545503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:41:22.546220 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:41:22.548832 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:41:22.549457 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:41:22.550186 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:41:22.557785 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:41:22.576601 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:41:22.581851 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:41:22.589616 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:41:22.638569 kernel: EXT4-fs (sda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:41:22.639146 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:41:22.641057 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:41:22.654720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:41:22.658980 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:41:22.661674 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:41:22.665596 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:41:22.669180 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (816)
Mar 17 17:41:22.669205 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:41:22.669223 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:41:22.669233 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:41:22.666906 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:41:22.672728 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:41:22.676203 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:41:22.676245 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:41:22.679237 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:41:22.681649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:41:22.740614 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:41:22.742464 coreos-metadata[818]: Mar 17 17:41:22.742 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:41:22.745546 coreos-metadata[818]: Mar 17 17:41:22.744 INFO Fetch successful
Mar 17 17:41:22.745546 coreos-metadata[818]: Mar 17 17:41:22.744 INFO wrote hostname ci-4152-2-2-4-d76a313bf1 to /sysroot/etc/hostname
Mar 17 17:41:22.746943 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:41:22.749032 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:41:22.752809 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:41:22.757325 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:41:22.852403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:41:22.858651 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:41:22.862708 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:41:22.866526 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:41:22.890113 ignition[933]: INFO : Ignition 2.20.0
Mar 17 17:41:22.892118 ignition[933]: INFO : Stage: mount
Mar 17 17:41:22.892118 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:22.892118 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:22.894474 ignition[933]: INFO : mount: mount passed
Mar 17 17:41:22.894474 ignition[933]: INFO : Ignition finished successfully
Mar 17 17:41:22.894736 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:41:22.895659 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:41:22.902666 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:41:23.051329 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:41:23.058959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:41:23.068121 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944)
Mar 17 17:41:23.068208 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:41:23.068240 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:41:23.068659 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:41:23.072534 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:41:23.072599 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:41:23.074942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:41:23.106888 ignition[961]: INFO : Ignition 2.20.0
Mar 17 17:41:23.106888 ignition[961]: INFO : Stage: files
Mar 17 17:41:23.107968 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:23.107968 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:23.109458 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:41:23.110177 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:41:23.110177 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:41:23.113313 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:41:23.114316 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:41:23.114316 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:41:23.113732 unknown[961]: wrote ssh authorized keys file for user: core
Mar 17 17:41:23.116673 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:41:23.116673 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 17 17:41:23.172984 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:41:23.365781 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:41:23.365781 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:41:23.368166 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:41:23.956650 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:41:23.970994 systemd-networkd[784]: eth0: Gained IPv6LL
Mar 17 17:41:24.035378 systemd-networkd[784]: eth1: Gained IPv6LL
Mar 17 17:41:24.069820 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:41:24.069820 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:41:24.072134 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:41:24.605428 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:41:24.882490 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:41:24.882490 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:41:24.884898 ignition[961]: INFO : files: files passed
Mar 17 17:41:24.884898 ignition[961]: INFO : Ignition finished successfully
Mar 17 17:41:24.887202 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:41:24.893684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:41:24.900546 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:41:24.903838 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:41:24.903961 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:41:24.914052 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:24.914052 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:24.917085 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:24.918949 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:41:24.920165 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:41:24.926815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:41:24.961171 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:41:24.961284 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:41:24.962774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:41:24.963999 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:41:24.965318 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:41:24.972811 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:41:24.987185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:41:24.991737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:41:25.007291 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:41:25.008072 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:41:25.009222 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:41:25.010263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:41:25.010387 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:41:25.011701 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:41:25.012313 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:41:25.013435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:41:25.014434 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:41:25.015450 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:41:25.016538 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:41:25.017609 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:41:25.018801 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:41:25.019781 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:41:25.020865 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:41:25.021788 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:41:25.021946 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:41:25.023183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:41:25.023830 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:41:25.024895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:41:25.025339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:41:25.026106 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:41:25.026230 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:41:25.027822 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:41:25.027947 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:41:25.029308 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:41:25.029403 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:41:25.030583 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:41:25.030689 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:41:25.039730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:41:25.044812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:41:25.045345 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:41:25.045469 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:41:25.046230 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:41:25.046317 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:41:25.052390 ignition[1014]: INFO : Ignition 2.20.0
Mar 17 17:41:25.052390 ignition[1014]: INFO : Stage: umount
Mar 17 17:41:25.052390 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:25.052390 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:41:25.058251 ignition[1014]: INFO : umount: umount passed
Mar 17 17:41:25.058251 ignition[1014]: INFO : Ignition finished successfully
Mar 17 17:41:25.056816 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:41:25.057590 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:41:25.064049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:41:25.064622 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:41:25.066217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:41:25.067960 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:41:25.068055 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:41:25.072252 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:41:25.072411 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:41:25.073904 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:41:25.073951 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:41:25.075049 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:41:25.075087 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:41:25.076052 systemd[1]: Stopped target network.target - Network.
Mar 17 17:41:25.077064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:41:25.077110 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:41:25.078335 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:41:25.079185 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:41:25.079565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:41:25.080215 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:41:25.081093 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:41:25.082169 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:41:25.082210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:41:25.083020 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:41:25.083058 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:41:25.084023 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:41:25.084071 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:41:25.085212 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:41:25.085250 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:41:25.086073 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:41:25.086110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:41:25.087221 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:41:25.087956 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:41:25.094551 systemd-networkd[784]: eth1: DHCPv6 lease lost
Mar 17 17:41:25.095145 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:41:25.095267 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:41:25.097550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:41:25.097617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:41:25.098675 systemd-networkd[784]: eth0: DHCPv6 lease lost
Mar 17 17:41:25.100394 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:41:25.100527 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:41:25.102223 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:41:25.102263 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:41:25.111728 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:41:25.112335 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:41:25.112410 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:41:25.114678 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:41:25.114734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:41:25.116256 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:41:25.116298 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:41:25.117408 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:41:25.131982 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:41:25.132803 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:41:25.133829 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:41:25.134000 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:41:25.136485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:41:25.136583 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:41:25.137710 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:41:25.137743 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:41:25.138672 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:41:25.138715 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:41:25.140481 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:41:25.140542 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:41:25.141764 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:41:25.141821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:41:25.147728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:41:25.149031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:41:25.149131 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:41:25.153217 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:41:25.153274 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:41:25.154116 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:41:25.154161 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:41:25.155493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:41:25.155564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:25.157531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:41:25.159539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:41:25.160820 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:41:25.163819 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:41:25.177425 systemd[1]: Switching root.
Mar 17 17:41:25.216190 systemd-journald[237]: Journal stopped
Mar 17 17:41:26.118689 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:41:26.118771 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:41:26.118786 kernel: SELinux: policy capability open_perms=1
Mar 17 17:41:26.118797 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:41:26.118814 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:41:26.118824 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:41:26.118836 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:41:26.118853 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:41:26.118864 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:41:26.118889 kernel: audit: type=1403 audit(1742233285.378:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:41:26.118902 systemd[1]: Successfully loaded SELinux policy in 32.532ms.
Mar 17 17:41:26.118928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.421ms.
Mar 17 17:41:26.118941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:41:26.118952 systemd[1]: Detected virtualization kvm.
Mar 17 17:41:26.118966 systemd[1]: Detected architecture arm64.
Mar 17 17:41:26.118977 systemd[1]: Detected first boot.
Mar 17 17:41:26.118987 systemd[1]: Hostname set to .
Mar 17 17:41:26.118998 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:41:26.119009 zram_generator::config[1057]: No configuration found.
Mar 17 17:41:26.119021 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:41:26.119033 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:41:26.119044 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:41:26.119056 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:41:26.119068 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:41:26.119078 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:41:26.119089 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:41:26.119099 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:41:26.119109 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:41:26.119120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:41:26.119131 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:41:26.119142 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:41:26.119153 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:41:26.119163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:41:26.119174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:41:26.119184 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:41:26.119195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:41:26.119205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:41:26.119216 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:41:26.119229 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:41:26.119240 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:41:26.119250 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:41:26.119261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:41:26.119272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:41:26.119283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:41:26.119299 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:41:26.119310 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:41:26.119320 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:41:26.119331 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:41:26.119341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:41:26.119352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:41:26.119363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:41:26.119374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:41:26.119384 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:41:26.119396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:41:26.119407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:41:26.119418 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:41:26.119429 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:41:26.119443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:41:26.119456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:41:26.119468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:41:26.119479 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:41:26.119494 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:41:26.119523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:26.119538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:41:26.119550 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:41:26.119566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:26.119576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:41:26.119590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:26.119600 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:41:26.119611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:26.119622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:41:26.119633 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:41:26.119644 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:41:26.119654 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:41:26.119665 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:41:26.119675 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:41:26.119687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:41:26.119698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:41:26.119709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:41:26.119720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:41:26.119730 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:41:26.119741 systemd[1]: Stopped verity-setup.service.
Mar 17 17:41:26.119751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:41:26.119762 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:41:26.119774 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:41:26.119785 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:41:26.119796 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:41:26.119807 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:41:26.119817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:41:26.119829 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:41:26.119844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:41:26.119856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:41:26.119869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:26.119914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:26.119948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:26.119969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:26.119982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:41:26.119995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:41:26.120008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:41:26.120021 kernel: loop: module loaded
Mar 17 17:41:26.120032 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:41:26.120047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:41:26.120061 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:41:26.120074 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:41:26.120089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:41:26.120102 kernel: fuse: init (API version 7.39)
Mar 17 17:41:26.120116 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:41:26.120128 kernel: ACPI: bus type drm_connector registered
Mar 17 17:41:26.120172 systemd-journald[1138]: Collecting audit messages is disabled.
Mar 17 17:41:26.120195 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:41:26.122607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:26.122624 systemd-journald[1138]: Journal started
Mar 17 17:41:26.122654 systemd-journald[1138]: Runtime Journal (/run/log/journal/86e2017f0e1a465b883e2af2283b47c8) is 8.0M, max 76.6M, 68.6M free.
Mar 17 17:41:25.830451 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:41:25.851250 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:41:25.852166 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:41:26.132442 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:41:26.137067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:41:26.138869 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Mar 17 17:41:26.139224 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Mar 17 17:41:26.153737 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:41:26.160918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:41:26.164426 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:41:26.164497 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:41:26.166654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:41:26.168483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:41:26.169840 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:41:26.170027 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:41:26.171134 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:26.171264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:26.173955 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:41:26.174846 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:41:26.175757 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:41:26.192453 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:41:26.206446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:41:26.209840 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:41:26.221534 kernel: loop0: detected capacity change from 0 to 116808
Mar 17 17:41:26.221585 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:41:26.230711 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:41:26.237116 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:41:26.238684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:41:26.240831 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:41:26.244502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:41:26.246190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:41:26.256626 systemd-journald[1138]: Time spent on flushing to /var/log/journal/86e2017f0e1a465b883e2af2283b47c8 is 51.577ms for 1141 entries.
Mar 17 17:41:26.256626 systemd-journald[1138]: System Journal (/var/log/journal/86e2017f0e1a465b883e2af2283b47c8) is 8.0M, max 584.8M, 576.8M free.
Mar 17 17:41:26.329029 systemd-journald[1138]: Received client request to flush runtime journal.
Mar 17 17:41:26.329079 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:41:26.329094 kernel: loop1: detected capacity change from 0 to 201592
Mar 17 17:41:26.263601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:41:26.278838 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:41:26.292025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:41:26.295538 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:41:26.300266 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:41:26.332410 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:41:26.336080 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:41:26.347896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:41:26.363703 kernel: loop2: detected capacity change from 0 to 113536
Mar 17 17:41:26.378290 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 17 17:41:26.378309 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 17 17:41:26.392191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:41:26.403582 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:41:26.421599 kernel: loop4: detected capacity change from 0 to 116808
Mar 17 17:41:26.442551 kernel: loop5: detected capacity change from 0 to 201592
Mar 17 17:41:26.463550 kernel: loop6: detected capacity change from 0 to 113536
Mar 17 17:41:26.493923 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:41:26.495844 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:41:26.496385 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 17 17:41:26.504993 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:41:26.505292 systemd[1]: Reloading...
Mar 17 17:41:26.623580 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:41:26.631531 zram_generator::config[1236]: No configuration found.
Mar 17 17:41:26.744680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:41:26.790359 systemd[1]: Reloading finished in 283 ms.
Mar 17 17:41:26.816237 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:41:26.817443 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:41:26.829265 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:41:26.835708 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:41:26.850586 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:41:26.850606 systemd[1]: Reloading...
Mar 17 17:41:26.868395 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:41:26.868688 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:41:26.869362 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:41:26.869593 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 17 17:41:26.869640 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 17 17:41:26.872222 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:41:26.872322 systemd-tmpfiles[1265]: Skipping /boot
Mar 17 17:41:26.880931 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:41:26.881080 systemd-tmpfiles[1265]: Skipping /boot
Mar 17 17:41:26.924538 zram_generator::config[1291]: No configuration found.
Mar 17 17:41:27.028881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:41:27.074216 systemd[1]: Reloading finished in 223 ms.
Mar 17 17:41:27.094602 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:41:27.095708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:41:27.114932 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:41:27.118811 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:41:27.123368 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:41:27.129433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:41:27.134712 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:41:27.139259 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:41:27.144595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:27.150846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:27.160757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:27.166412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:27.168676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:27.172700 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:41:27.178289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:27.179269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:27.181737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:27.188600 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:41:27.188890 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Mar 17 17:41:27.189557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:27.199591 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:41:27.202545 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:41:27.219972 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:41:27.223819 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:41:27.224959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:27.226019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:27.227775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:41:27.228729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:27.230543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:27.265091 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:41:27.266646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:41:27.267270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:27.269580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:27.275796 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:41:27.282574 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:41:27.283226 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:41:27.289611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:41:27.300337 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:41:27.301196 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:41:27.308825 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:41:27.311280 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:41:27.319519 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 17 17:41:27.329909 augenrules[1399]: No rules
Mar 17 17:41:27.344092 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:41:27.344669 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:41:27.425541 systemd-resolved[1334]: Positive Trust Anchors:
Mar 17 17:41:27.425616 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:41:27.425647 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:41:27.431775 systemd-resolved[1334]: Using system hostname 'ci-4152-2-2-4-d76a313bf1'.
Mar 17 17:41:27.434106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:41:27.434853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:41:27.438748 systemd-networkd[1365]: lo: Link UP
Mar 17 17:41:27.440665 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:41:27.440711 systemd-networkd[1365]: lo: Gained carrier
Mar 17 17:41:27.441455 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:41:27.457009 systemd-networkd[1365]: Enumeration completed
Mar 17 17:41:27.457293 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:41:27.458129 systemd[1]: Reached target network.target - Network.
Mar 17 17:41:27.459914 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.460000 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:41:27.461729 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.461811 systemd-networkd[1365]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:41:27.462454 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.463029 systemd-networkd[1365]: eth0: Link UP
Mar 17 17:41:27.463093 systemd-networkd[1365]: eth0: Gained carrier
Mar 17 17:41:27.463143 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.465953 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:41:27.468800 systemd-networkd[1365]: eth1: Link UP
Mar 17 17:41:27.468942 systemd-networkd[1365]: eth1: Gained carrier
Mar 17 17:41:27.468968 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.494626 systemd-networkd[1365]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:41:27.496420 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Mar 17 17:41:27.507986 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:27.523549 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:41:27.524622 systemd-networkd[1365]: eth0: DHCPv4 address 88.198.122.152/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:41:27.525570 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Mar 17 17:41:27.529535 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1378)
Mar 17 17:41:27.578398 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 17 17:41:27.578535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:27.583072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:27.592772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:27.606923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:27.607962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:27.608005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:41:27.608368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:27.610049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:27.611069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:27.611201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:27.613843 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:27.614459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:27.620392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:41:27.620466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:41:27.646632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:27.649161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:41:27.650791 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 17 17:41:27.650841 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:41:27.650927 kernel: [drm] features: -context_init
Mar 17 17:41:27.650944 kernel: [drm] number of scanouts: 1
Mar 17 17:41:27.650956 kernel: [drm] number of cap sets: 0
Mar 17 17:41:27.653550 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:41:27.662324 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:41:27.661956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:41:27.668533 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:41:27.681757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:41:27.682597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:27.683714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:41:27.698958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:27.759391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:27.803311 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:41:27.811779 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:41:27.827440 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:41:27.860028 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:41:27.861659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:41:27.862794 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:41:27.863968 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:41:27.865263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:41:27.866186 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:41:27.866914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:41:27.868077 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:41:27.868833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:41:27.868877 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:41:27.869374 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:41:27.871327 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:41:27.873349 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:41:27.878759 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:41:27.881643 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:41:27.883139 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:41:27.884019 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:41:27.884677 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:41:27.885375 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:41:27.885410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:41:27.887691 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:41:27.892098 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:41:27.904743 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:41:27.909020 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:41:27.922385 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:41:27.927711 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:41:27.929754 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:41:27.932674 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:41:27.937747 jq[1460]: false
Mar 17 17:41:27.940739 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:41:27.946750 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:41:27.951731 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:41:27.956039 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:41:27.959657 dbus-daemon[1457]: [system] SELinux support is enabled
Mar 17 17:41:27.961154 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:41:27.963351 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:41:27.964738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:41:27.971733 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:41:27.975479 coreos-metadata[1456]: Mar 17 17:41:27.973 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:41:27.977726 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:41:27.978811 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:41:27.979983 coreos-metadata[1456]: Mar 17 17:41:27.979 INFO Fetch successful
Mar 17 17:41:27.979983 coreos-metadata[1456]: Mar 17 17:41:27.979 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:41:27.985674 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:41:27.988074 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:41:27.990651 coreos-metadata[1456]: Mar 17 17:41:27.988 INFO Fetch successful
Mar 17 17:41:27.988763 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:41:28.014823 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:41:28.014911 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:41:28.016681 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:41:28.016712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:41:28.031877 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:41:28.032078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:41:28.033837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:41:28.034008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:41:28.041929 jq[1470]: true
Mar 17 17:41:28.041715 systemd-logind[1467]: New seat seat0.
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found loop4
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found loop5
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found loop6
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found loop7
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda1
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda2
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda3
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found usr
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda4
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda6
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda7
Mar 17 17:41:28.047137 extend-filesystems[1461]: Found sda9
Mar 17 17:41:28.047137 extend-filesystems[1461]: Checking size of /dev/sda9
Mar 17 17:41:28.099668 tar[1473]: linux-arm64/LICENSE
Mar 17 17:41:28.099668 tar[1473]: linux-arm64/helm
Mar 17 17:41:28.099986 update_engine[1468]: I20250317 17:41:28.070489 1468 main.cc:92] Flatcar Update Engine starting
Mar 17 17:41:28.099986 update_engine[1468]: I20250317 17:41:28.077744 1468 update_check_scheduler.cc:74] Next update check in 6m44s
Mar 17 17:41:28.057182 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:41:28.100207 jq[1492]: true
Mar 17 17:41:28.057199 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 17 17:41:28.057448 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:41:28.077207 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:41:28.105106 extend-filesystems[1461]: Resized partition /dev/sda9
Mar 17 17:41:28.083306 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:41:28.105854 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:41:28.086887 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:41:28.109566 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:41:28.150207 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:41:28.153845 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:41:28.161062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1385)
Mar 17 17:41:28.247173 bash[1527]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:41:28.248157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:41:28.258390 systemd[1]: Starting sshkeys.service...
Mar 17 17:41:28.278109 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:41:28.281486 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:41:28.292263 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:41:28.299717 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:41:28.299717 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:41:28.299717 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:41:28.306247 extend-filesystems[1461]: Resized filesystem in /dev/sda9
Mar 17 17:41:28.306247 extend-filesystems[1461]: Found sr0
Mar 17 17:41:28.304156 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:41:28.305555 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:41:28.349191 coreos-metadata[1537]: Mar 17 17:41:28.349 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:41:28.352782 coreos-metadata[1537]: Mar 17 17:41:28.352 INFO Fetch successful
Mar 17 17:41:28.357541 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:41:28.358671 unknown[1537]: wrote ssh authorized keys file for user: core
Mar 17 17:41:28.396851 update-ssh-keys[1545]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:41:28.400025 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:41:28.404618 systemd[1]: Finished sshkeys.service.
Mar 17 17:41:28.445781 containerd[1491]: time="2025-03-17T17:41:28.443555640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:41:28.482330 containerd[1491]: time="2025-03-17T17:41:28.482275200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.484346 containerd[1491]: time="2025-03-17T17:41:28.484299360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:28.484482 containerd[1491]: time="2025-03-17T17:41:28.484465600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:41:28.484557 containerd[1491]: time="2025-03-17T17:41:28.484543440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:41:28.484787 containerd[1491]: time="2025-03-17T17:41:28.484769360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:41:28.484909 containerd[1491]: time="2025-03-17T17:41:28.484854200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485037 containerd[1491]: time="2025-03-17T17:41:28.485019600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485090 containerd[1491]: time="2025-03-17T17:41:28.485077800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485304 containerd[1491]: time="2025-03-17T17:41:28.485283680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485362 containerd[1491]: time="2025-03-17T17:41:28.485349240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485416 containerd[1491]: time="2025-03-17T17:41:28.485402080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485471 containerd[1491]: time="2025-03-17T17:41:28.485458600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485622 containerd[1491]: time="2025-03-17T17:41:28.485603880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.485882 containerd[1491]: time="2025-03-17T17:41:28.485847600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:28.486055 containerd[1491]: time="2025-03-17T17:41:28.486036560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:28.486109 containerd[1491]: time="2025-03-17T17:41:28.486097200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:41:28.486229 containerd[1491]: time="2025-03-17T17:41:28.486213440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:41:28.486323 containerd[1491]: time="2025-03-17T17:41:28.486309480Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:41:28.492577 containerd[1491]: time="2025-03-17T17:41:28.492540800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:41:28.492799 containerd[1491]: time="2025-03-17T17:41:28.492779160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:41:28.492996 containerd[1491]: time="2025-03-17T17:41:28.492896200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:41:28.493087 containerd[1491]: time="2025-03-17T17:41:28.493070280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:41:28.493157 containerd[1491]: time="2025-03-17T17:41:28.493142600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:41:28.493444 containerd[1491]: time="2025-03-17T17:41:28.493412640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:41:28.494085 containerd[1491]: time="2025-03-17T17:41:28.494053560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:41:28.494222 containerd[1491]: time="2025-03-17T17:41:28.494202080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:41:28.494253 containerd[1491]: time="2025-03-17T17:41:28.494226080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:41:28.494253 containerd[1491]: time="2025-03-17T17:41:28.494241760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494255480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494268360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494280360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494294120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494307960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494323080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494335360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494353 containerd[1491]: time="2025-03-17T17:41:28.494347640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494368680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494388320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494400680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494413560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494426200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494450920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494463280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494479360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494493840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494523280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494533 containerd[1491]: time="2025-03-17T17:41:28.494537520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494549920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494564480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494585960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494607240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494618800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494797560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494815040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494825200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:41:28.494837 containerd[1491]: time="2025-03-17T17:41:28.494838440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:41:28.495125 containerd[1491]: time="2025-03-17T17:41:28.494848800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.495125 containerd[1491]: time="2025-03-17T17:41:28.494899120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:41:28.495125 containerd[1491]: time="2025-03-17T17:41:28.494913680Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:41:28.495125 containerd[1491]: time="2025-03-17T17:41:28.494924160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:41:28.495325 containerd[1491]: time="2025-03-17T17:41:28.495258600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:41:28.495325 containerd[1491]: time="2025-03-17T17:41:28.495317400Z" level=info msg="Connect containerd service"
Mar 17 17:41:28.495474 containerd[1491]: time="2025-03-17T17:41:28.495352080Z" level=info msg="using legacy CRI server"
Mar 17 17:41:28.495474 containerd[1491]: time="2025-03-17T17:41:28.495359800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.495659320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.496362760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.496907640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.496953360Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.497059520Z" level=info msg="Start subscribing containerd event"
Mar 17 17:41:28.497119 containerd[1491]: time="2025-03-17T17:41:28.497090760Z" level=info msg="Start recovering state"
Mar 17 17:41:28.499838 containerd[1491]: time="2025-03-17T17:41:28.499721880Z" level=info msg="Start event monitor"
Mar 17 17:41:28.499983 containerd[1491]: time="2025-03-17T17:41:28.499964240Z" level=info msg="Start snapshots syncer"
Mar 17 17:41:28.500090 containerd[1491]: time="2025-03-17T17:41:28.500075040Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:41:28.500148 containerd[1491]: time="2025-03-17T17:41:28.500131240Z" level=info msg="Start streaming server"
Mar 17 17:41:28.505563 containerd[1491]: time="2025-03-17T17:41:28.505535960Z" level=info msg="containerd successfully booted in 0.065774s"
Mar 17 17:41:28.505650 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:41:28.805650 tar[1473]: linux-arm64/README.md
Mar 17 17:41:28.819647 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:41:28.898704 systemd-networkd[1365]: eth0: Gained IPv6LL
Mar 17 17:41:28.899293 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Mar 17 17:41:28.906272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:41:28.908832 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:41:28.917056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:41:28.922874 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:41:28.949174 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:41:29.248034 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:41:29.268668 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:41:29.276915 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:41:29.284428 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:41:29.286399 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:41:29.296090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:41:29.305938 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:41:29.313493 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:41:29.317737 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 17 17:41:29.319331 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:41:29.474832 systemd-networkd[1365]: eth1: Gained IPv6LL
Mar 17 17:41:29.475684 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Mar 17 17:41:29.655223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:41:29.656890 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:41:29.660640 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:41:29.663229 systemd[1]: Startup finished in 768ms (kernel) + 5.694s (initrd) + 4.317s (userspace) = 10.779s.
Mar 17 17:41:30.146707 kubelet[1587]: E0317 17:41:30.146181 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:41:30.149110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:41:30.149256 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:41:40.400118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:41:40.410881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:41:40.508392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:41:40.514194 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:41:40.560634 kubelet[1605]: E0317 17:41:40.560557 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:41:40.566959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:41:40.567720 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:41:50.817816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:41:50.828974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:41:50.937818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:41:50.949329 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:41:50.995410 kubelet[1621]: E0317 17:41:50.995335 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:41:50.998190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:41:50.998370 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:41:59.511717 systemd-timesyncd[1350]: Contacted time server 49.12.125.53:123 (2.flatcar.pool.ntp.org).
Mar 17 17:41:59.511801 systemd-timesyncd[1350]: Initial clock synchronization to Mon 2025-03-17 17:41:59.676571 UTC.
Mar 17 17:42:01.008599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:42:01.013852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:01.144923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:01.160428 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:01.211839 kubelet[1636]: E0317 17:42:01.211710 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:01.214198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:01.214372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:11.259182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:42:11.268886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:11.378346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:11.383442 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:11.429049 kubelet[1652]: E0317 17:42:11.428979 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:11.432677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:11.432891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:12.929616 update_engine[1468]: I20250317 17:42:12.928592 1468 update_attempter.cc:509] Updating boot flags...
Mar 17 17:42:12.976608 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1668)
Mar 17 17:42:13.027289 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1668)
Mar 17 17:42:13.091542 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1668)
Mar 17 17:42:21.509048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:42:21.515998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:21.637493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:21.649118 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:21.695352 kubelet[1688]: E0317 17:42:21.695291 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:21.698045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:21.698228 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:31.758604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 17:42:31.770148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:31.896446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:31.911152 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:31.954631 kubelet[1703]: E0317 17:42:31.954580 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:31.957021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:31.957230 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:42.008709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 17:42:42.016928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:42.137602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:42.147033 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:42.187343 kubelet[1718]: E0317 17:42:42.187279 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:42.189955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:42.190151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:52.258826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 17:42:52.265837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:52.394774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:52.395708 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:52.441547 kubelet[1733]: E0317 17:42:52.441392 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:52.444739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:52.444866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:43:02.508819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 17 17:43:02.517985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:02.624393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:02.629974 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:43:02.672542 kubelet[1747]: E0317 17:43:02.672192 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:43:02.674641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:43:02.674771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:43:12.759111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 17 17:43:12.764864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:12.898211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:12.909439 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:43:12.954099 kubelet[1763]: E0317 17:43:12.954027 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:43:12.956697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:43:12.956937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:43:23.008866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 17 17:43:23.018928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:23.141346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:23.154880 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:43:23.202547 kubelet[1779]: E0317 17:43:23.202466 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:43:23.204972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:43:23.205312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:43:23.326753 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:43:23.337068 systemd[1]: Started sshd@0-88.198.122.152:22-139.178.89.65:41208.service - OpenSSH per-connection server daemon (139.178.89.65:41208).
Mar 17 17:43:24.326535 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 41208 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:24.328688 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:24.339471 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:43:24.348979 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:43:24.354005 systemd-logind[1467]: New session 1 of user core.
Mar 17 17:43:24.362730 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:43:24.372026 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:43:24.375470 (systemd)[1791]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:43:24.475183 systemd[1791]: Queued start job for default target default.target.
Mar 17 17:43:24.487432 systemd[1791]: Created slice app.slice - User Application Slice.
Mar 17 17:43:24.487490 systemd[1791]: Reached target paths.target - Paths.
Mar 17 17:43:24.487546 systemd[1791]: Reached target timers.target - Timers.
Mar 17 17:43:24.489948 systemd[1791]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:43:24.503998 systemd[1791]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:43:24.504324 systemd[1791]: Reached target sockets.target - Sockets.
Mar 17 17:43:24.504442 systemd[1791]: Reached target basic.target - Basic System.
Mar 17 17:43:24.504620 systemd[1791]: Reached target default.target - Main User Target.
Mar 17 17:43:24.504746 systemd[1791]: Startup finished in 122ms.
Mar 17 17:43:24.505035 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:43:24.511775 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:43:25.208880 systemd[1]: Started sshd@1-88.198.122.152:22-139.178.89.65:41222.service - OpenSSH per-connection server daemon (139.178.89.65:41222).
Mar 17 17:43:26.202226 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 41222 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:26.204801 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:26.210072 systemd-logind[1467]: New session 2 of user core.
Mar 17 17:43:26.220779 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:43:26.890921 sshd[1804]: Connection closed by 139.178.89.65 port 41222
Mar 17 17:43:26.891703 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:26.896713 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:43:26.898026 systemd[1]: sshd@1-88.198.122.152:22-139.178.89.65:41222.service: Deactivated successfully.
Mar 17 17:43:26.900725 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:43:26.902264 systemd-logind[1467]: Removed session 2.
Mar 17 17:43:27.066007 systemd[1]: Started sshd@2-88.198.122.152:22-139.178.89.65:41228.service - OpenSSH per-connection server daemon (139.178.89.65:41228).
Mar 17 17:43:28.052927 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 41228 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:28.055336 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:28.061305 systemd-logind[1467]: New session 3 of user core.
Mar 17 17:43:28.068860 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:43:28.730351 sshd[1811]: Connection closed by 139.178.89.65 port 41228
Mar 17 17:43:28.730240 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:28.735600 systemd[1]: sshd@2-88.198.122.152:22-139.178.89.65:41228.service: Deactivated successfully.
Mar 17 17:43:28.737199 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:43:28.737876 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:43:28.739126 systemd-logind[1467]: Removed session 3.
Mar 17 17:43:28.913986 systemd[1]: Started sshd@3-88.198.122.152:22-139.178.89.65:41244.service - OpenSSH per-connection server daemon (139.178.89.65:41244).
Mar 17 17:43:29.905169 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 41244 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:29.906961 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:29.912992 systemd-logind[1467]: New session 4 of user core.
Mar 17 17:43:29.919820 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:43:30.591535 sshd[1818]: Connection closed by 139.178.89.65 port 41244
Mar 17 17:43:30.592488 sshd-session[1816]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:30.597372 systemd[1]: sshd@3-88.198.122.152:22-139.178.89.65:41244.service: Deactivated successfully.
Mar 17 17:43:30.599288 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:43:30.601287 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:43:30.602378 systemd-logind[1467]: Removed session 4.
Mar 17 17:43:30.761903 systemd[1]: Started sshd@4-88.198.122.152:22-139.178.89.65:41258.service - OpenSSH per-connection server daemon (139.178.89.65:41258).
Mar 17 17:43:31.759487 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 41258 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:31.761590 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:31.765992 systemd-logind[1467]: New session 5 of user core.
Mar 17 17:43:31.776833 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:43:32.291387 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:43:32.292316 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:43:32.312442 sudo[1826]: pam_unix(sudo:session): session closed for user root
Mar 17 17:43:32.473544 sshd[1825]: Connection closed by 139.178.89.65 port 41258
Mar 17 17:43:32.472921 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:32.479393 systemd[1]: sshd@4-88.198.122.152:22-139.178.89.65:41258.service: Deactivated successfully.
Mar 17 17:43:32.481120 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:43:32.484011 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:43:32.485721 systemd-logind[1467]: Removed session 5.
Mar 17 17:43:32.649989 systemd[1]: Started sshd@5-88.198.122.152:22-139.178.89.65:54516.service - OpenSSH per-connection server daemon (139.178.89.65:54516).
Mar 17 17:43:33.258763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 17 17:43:33.268220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:33.392815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:33.393228 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:43:33.434208 kubelet[1841]: E0317 17:43:33.434143 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:43:33.436876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:43:33.437127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:43:33.634102 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 54516 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:33.635189 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:33.642386 systemd-logind[1467]: New session 6 of user core.
Mar 17 17:43:33.648989 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:43:34.155269 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:43:34.156110 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:43:34.160158 sudo[1850]: pam_unix(sudo:session): session closed for user root
Mar 17 17:43:34.165793 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:43:34.166111 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:43:34.190133 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:43:34.219307 augenrules[1872]: No rules
Mar 17 17:43:34.221007 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:43:34.221226 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:43:34.222983 sudo[1849]: pam_unix(sudo:session): session closed for user root
Mar 17 17:43:34.382135 sshd[1848]: Connection closed by 139.178.89.65 port 54516
Mar 17 17:43:34.383170 sshd-session[1831]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:34.390216 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:43:34.390662 systemd[1]: sshd@5-88.198.122.152:22-139.178.89.65:54516.service: Deactivated successfully.
Mar 17 17:43:34.393166 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:43:34.394422 systemd-logind[1467]: Removed session 6.
Mar 17 17:43:34.556904 systemd[1]: Started sshd@6-88.198.122.152:22-139.178.89.65:54524.service - OpenSSH per-connection server daemon (139.178.89.65:54524).
Mar 17 17:43:35.541777 sshd[1880]: Accepted publickey for core from 139.178.89.65 port 54524 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:35.543444 sshd-session[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:35.548278 systemd-logind[1467]: New session 7 of user core.
Mar 17 17:43:35.556826 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:43:36.064289 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:43:36.064593 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:43:36.365902 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:43:36.366541 (dockerd)[1900]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:43:36.592367 dockerd[1900]: time="2025-03-17T17:43:36.591484856Z" level=info msg="Starting up"
Mar 17 17:43:36.683380 dockerd[1900]: time="2025-03-17T17:43:36.683060766Z" level=info msg="Loading containers: start."
Mar 17 17:43:36.837549 kernel: Initializing XFRM netlink socket
Mar 17 17:43:36.923604 systemd-networkd[1365]: docker0: Link UP
Mar 17 17:43:36.957567 dockerd[1900]: time="2025-03-17T17:43:36.957239677Z" level=info msg="Loading containers: done."
Mar 17 17:43:36.974369 dockerd[1900]: time="2025-03-17T17:43:36.973906400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:43:36.974369 dockerd[1900]: time="2025-03-17T17:43:36.974022804Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Mar 17 17:43:36.974369 dockerd[1900]: time="2025-03-17T17:43:36.974151729Z" level=info msg="Daemon has completed initialization"
Mar 17 17:43:37.012558 dockerd[1900]: time="2025-03-17T17:43:37.012338891Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:43:37.012890 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:43:37.759746 containerd[1491]: time="2025-03-17T17:43:37.759709169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 17 17:43:38.409035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037304747.mount: Deactivated successfully.
Mar 17 17:43:39.265072 containerd[1491]: time="2025-03-17T17:43:39.265003243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:39.267170 containerd[1491]: time="2025-03-17T17:43:39.266535125Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26232042"
Mar 17 17:43:39.268652 containerd[1491]: time="2025-03-17T17:43:39.268594901Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:39.275153 containerd[1491]: time="2025-03-17T17:43:39.275035798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:39.277430 containerd[1491]: time="2025-03-17T17:43:39.277148016Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 1.517394725s"
Mar 17 17:43:39.277430 containerd[1491]: time="2025-03-17T17:43:39.277208737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 17 17:43:39.278319 containerd[1491]: time="2025-03-17T17:43:39.278288927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 17 17:43:40.368456 containerd[1491]: time="2025-03-17T17:43:40.368393306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:40.370407 containerd[1491]: time="2025-03-17T17:43:40.370336719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530052"
Mar 17 17:43:40.371614 containerd[1491]: time="2025-03-17T17:43:40.371545233Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:40.375076 containerd[1491]: time="2025-03-17T17:43:40.375027128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:40.376638 containerd[1491]: time="2025-03-17T17:43:40.376455248Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.098128479s"
Mar 17 17:43:40.376638 containerd[1491]: time="2025-03-17T17:43:40.376495329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\""
Mar 17 17:43:40.377537 containerd[1491]: time="2025-03-17T17:43:40.377012863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 17 17:43:41.356162 containerd[1491]: time="2025-03-17T17:43:41.356105540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:41.357526 containerd[1491]: time="2025-03-17T17:43:41.357379215Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482581"
Mar 17 17:43:41.358482 containerd[1491]: time="2025-03-17T17:43:41.358433844Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:41.366247 containerd[1491]: time="2025-03-17T17:43:41.365992653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 988.943909ms"
Mar 17 17:43:41.366247 containerd[1491]: time="2025-03-17T17:43:41.366045455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\""
Mar 17 17:43:41.366501 containerd[1491]: time="2025-03-17T17:43:41.366454226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 17:43:41.367343 containerd[1491]: time="2025-03-17T17:43:41.366815236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:42.326069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296591837.mount: Deactivated successfully.
Mar 17 17:43:42.605632 containerd[1491]: time="2025-03-17T17:43:42.605482380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:42.607030 containerd[1491]: time="2025-03-17T17:43:42.606948061Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370121"
Mar 17 17:43:42.607655 containerd[1491]: time="2025-03-17T17:43:42.607434714Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:42.609716 containerd[1491]: time="2025-03-17T17:43:42.609658896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:42.610572 containerd[1491]: time="2025-03-17T17:43:42.610427518Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.243946931s"
Mar 17 17:43:42.610572 containerd[1491]: time="2025-03-17T17:43:42.610460758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 17 17:43:42.611121 containerd[1491]: time="2025-03-17T17:43:42.611093776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 17:43:43.180625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283278914.mount: Deactivated successfully.
Mar 17 17:43:43.508596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 17 17:43:43.516685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:43.632272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:43.637431 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:43:43.680558 kubelet[2211]: E0317 17:43:43.679750 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:43:43.683089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:43:43.683239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
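Every kubelet restart in the loop above fails the same way: /var/lib/kubelet/config.yaml does not exist, so the process exits with status 1 and systemd schedules the next restart. On a kubeadm-provisioned node that file is normally written by `kubeadm init` or `kubeadm join`, so the crash loop is expected until one of those runs. A minimal sketch of the check one might run on the node (the path comes from the error message above; the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: report whether the kubelet config file the log complains about exists.
# Until it does, the kubelet exits 1 and systemd keeps scheduling restarts.
kubelet_config_state() {
    config="${1:-/var/lib/kubelet/config.yaml}"
    if [ -f "$config" ]; then
        echo "present"
    else
        # kubeadm init/join normally writes this file during node bootstrap.
        echo "missing"
    fi
}

kubelet_config_state /var/lib/kubelet/config.yaml
```

With the file absent this prints "missing", matching the `open /var/lib/kubelet/config.yaml: no such file or directory` error repeated throughout the log.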
Mar 17 17:43:43.942222 containerd[1491]: time="2025-03-17T17:43:43.941987931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:43.944029 containerd[1491]: time="2025-03-17T17:43:43.943642338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Mar 17 17:43:43.945182 containerd[1491]: time="2025-03-17T17:43:43.945062257Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:43.948995 containerd[1491]: time="2025-03-17T17:43:43.948928725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:43.951208 containerd[1491]: time="2025-03-17T17:43:43.950945582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.339814444s"
Mar 17 17:43:43.951208 containerd[1491]: time="2025-03-17T17:43:43.950994263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Mar 17 17:43:43.952475 containerd[1491]: time="2025-03-17T17:43:43.952450664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 17:43:44.440845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574217515.mount: Deactivated successfully.
Mar 17 17:43:44.447571 containerd[1491]: time="2025-03-17T17:43:44.447455943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:44.448455 containerd[1491]: time="2025-03-17T17:43:44.448420490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 17 17:43:44.449582 containerd[1491]: time="2025-03-17T17:43:44.449447119Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:44.452243 containerd[1491]: time="2025-03-17T17:43:44.452186116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:44.453257 containerd[1491]: time="2025-03-17T17:43:44.453142103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.565356ms"
Mar 17 17:43:44.453257 containerd[1491]: time="2025-03-17T17:43:44.453171944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 17 17:43:44.454034 containerd[1491]: time="2025-03-17T17:43:44.453921165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 17:43:45.081088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144848220.mount: Deactivated successfully.
Mar 17 17:43:46.484119 containerd[1491]: time="2025-03-17T17:43:46.484054352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:46.486314 containerd[1491]: time="2025-03-17T17:43:46.485579235Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491"
Mar 17 17:43:46.487686 containerd[1491]: time="2025-03-17T17:43:46.487649293Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:46.492523 containerd[1491]: time="2025-03-17T17:43:46.492438829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:46.494962 containerd[1491]: time="2025-03-17T17:43:46.494458006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.040453959s"
Mar 17 17:43:46.494962 containerd[1491]: time="2025-03-17T17:43:46.494498407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Mar 17 17:43:50.707271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:50.716021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:50.750658 systemd[1]: Reloading requested from client PID 2310 ('systemctl') (unit session-7.scope)...
Mar 17 17:43:50.750677 systemd[1]: Reloading...
Mar 17 17:43:50.861537 zram_generator::config[2350]: No configuration found.
Mar 17 17:43:50.963659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:43:51.032209 systemd[1]: Reloading finished in 281 ms.
Mar 17 17:43:51.077653 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 17:43:51.077732 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 17:43:51.078047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:51.082854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:51.209727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:51.212820 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:43:51.257590 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:43:51.257995 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:43:51.258066 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
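The deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A hedged sketch of the equivalent KubeletConfiguration stanza (field names per the upstream kubelet config-file docs linked in the warnings; the endpoint and directory values here are illustrative assumptions, not taken from this log):

```yaml
# Sketch of /var/lib/kubelet/config.yaml fragment replacing the deprecated flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (value is an assumed containerd socket)
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
# replaces --volume-plugin-dir (value is the common default, assumed here)
volumePluginDir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
```

This is a config fragment only; the full file on this node would carry the rest of the kubelet's settings.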
Mar 17 17:43:51.258280 kubelet[2399]: I0317 17:43:51.258232 2399 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:43:51.894753 kubelet[2399]: I0317 17:43:51.894698 2399 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 17:43:51.894753 kubelet[2399]: I0317 17:43:51.894740 2399 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:43:51.895097 kubelet[2399]: I0317 17:43:51.895060 2399 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 17:43:51.923229 kubelet[2399]: E0317 17:43:51.923164 2399 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://88.198.122.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:51.926642 kubelet[2399]: I0317 17:43:51.926448 2399 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:43:51.936062 kubelet[2399]: E0317 17:43:51.935993 2399 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:43:51.936062 kubelet[2399]: I0317 17:43:51.936036 2399 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:43:51.939081 kubelet[2399]: I0317 17:43:51.939025 2399 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:43:51.939985 kubelet[2399]: I0317 17:43:51.939884 2399 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:43:51.940186 kubelet[2399]: I0317 17:43:51.939939 2399 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-4-d76a313bf1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:43:51.940344 kubelet[2399]: I0317 17:43:51.940255 2399 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:43:51.940344 kubelet[2399]: I0317 17:43:51.940266 2399 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:43:51.940517 kubelet[2399]: I0317 17:43:51.940482 2399 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:43:51.944215 kubelet[2399]: I0317 17:43:51.944118 2399 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:43:51.944215 kubelet[2399]: I0317 17:43:51.944146 2399 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:43:51.944215 kubelet[2399]: I0317 17:43:51.944169 2399 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:43:51.944215 kubelet[2399]: I0317 17:43:51.944179 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:43:51.951639 kubelet[2399]: W0317 17:43:51.950855 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.198.122.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-d76a313bf1&limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:51.951639 kubelet[2399]: E0317 17:43:51.950915 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.122.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-d76a313bf1&limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:51.951639 kubelet[2399]: W0317 17:43:51.951306 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.122.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:51.951639 kubelet[2399]: E0317 17:43:51.951340 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.122.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:51.951959 kubelet[2399]: I0317 17:43:51.951943 2399 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:43:51.952726 kubelet[2399]: I0317 17:43:51.952707 2399 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:43:51.952935 kubelet[2399]: W0317 17:43:51.952924 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:43:51.954587 kubelet[2399]: I0317 17:43:51.954250 2399 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:43:51.954587 kubelet[2399]: I0317 17:43:51.954288 2399 server.go:1287] "Started kubelet"
Mar 17 17:43:51.956386 kubelet[2399]: I0317 17:43:51.955826 2399 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:43:51.958110 kubelet[2399]: I0317 17:43:51.958068 2399 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:43:51.958529 kubelet[2399]: I0317 17:43:51.958451 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:43:51.958894 kubelet[2399]: I0317 17:43:51.958875 2399 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:43:51.959423 kubelet[2399]: E0317 17:43:51.959143 2399 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.198.122.152:6443/api/v1/namespaces/default/events\": dial tcp 88.198.122.152:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-4-d76a313bf1.182da8135b71995b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-4-d76a313bf1,UID:ci-4152-2-2-4-d76a313bf1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-4-d76a313bf1,},FirstTimestamp:2025-03-17 17:43:51.954266459 +0000 UTC m=+0.737437381,LastTimestamp:2025-03-17 17:43:51.954266459 +0000 UTC m=+0.737437381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-4-d76a313bf1,}"
Mar 17 17:43:51.962170 kubelet[2399]: I0317 17:43:51.961522 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:43:51.962170 kubelet[2399]: I0317 17:43:51.961622 2399 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:43:51.962332 kubelet[2399]: I0317 17:43:51.962314 2399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:43:51.964346 kubelet[2399]: I0317 17:43:51.964320 2399 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:43:51.964539 kubelet[2399]: I0317 17:43:51.964527 2399 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:43:51.966728 kubelet[2399]: W0317 17:43:51.966669 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.198.122.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:51.966934 kubelet[2399]: E0317 17:43:51.966914 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.122.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:51.967671 kubelet[2399]: E0317 17:43:51.967497 2399 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152-2-2-4-d76a313bf1\" not found"
Mar 17 17:43:51.967671 kubelet[2399]: E0317 17:43:51.967618 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.122.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-d76a313bf1?timeout=10s\": dial tcp 88.198.122.152:6443: connect: connection refused" interval="200ms"
Mar 17 17:43:51.968846 kubelet[2399]: I0317 17:43:51.968296 2399 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:43:51.968846 kubelet[2399]: I0317 17:43:51.968383 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:43:51.968846 kubelet[2399]: E0317 17:43:51.968569 2399 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:43:51.969480 kubelet[2399]: I0317 17:43:51.969458 2399 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:43:51.991359 kubelet[2399]: I0317 17:43:51.991314 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:43:51.998961 kubelet[2399]: I0317 17:43:51.998905 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:43:51.998961 kubelet[2399]: I0317 17:43:51.998956 2399 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:43:51.999137 kubelet[2399]: I0317 17:43:51.998989 2399 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:43:51.999137 kubelet[2399]: I0317 17:43:51.999014 2399 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:43:51.999137 kubelet[2399]: E0317 17:43:51.999070 2399 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:43:52.004597 kubelet[2399]: W0317 17:43:52.003197 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.122.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:52.004919 kubelet[2399]: E0317 17:43:52.004765 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.122.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:52.006672 kubelet[2399]: I0317 17:43:52.006261 2399 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 17:43:52.006672 kubelet[2399]: I0317 17:43:52.006283 2399 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 17:43:52.006672 kubelet[2399]: I0317 17:43:52.006303 2399 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:43:52.008923 kubelet[2399]: I0317 17:43:52.008897 2399 policy_none.go:49] "None policy: Start"
Mar 17 17:43:52.009065 kubelet[2399]: I0317 17:43:52.009050 2399 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 17:43:52.009140 kubelet[2399]: I0317 17:43:52.009131 2399 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:43:52.016727 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:43:52.030329 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:43:52.046174 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:43:52.048115 kubelet[2399]: I0317 17:43:52.047870 2399 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:43:52.048115 kubelet[2399]: I0317 17:43:52.048112 2399 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:43:52.048242 kubelet[2399]: I0317 17:43:52.048126 2399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:43:52.048529 kubelet[2399]: I0317 17:43:52.048402 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:43:52.050198 kubelet[2399]: E0317 17:43:52.050147 2399 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 17:43:52.050281 kubelet[2399]: E0317 17:43:52.050238 2399 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-4-d76a313bf1\" not found"
Mar 17 17:43:52.112651 systemd[1]: Created slice kubepods-burstable-podde7608229fd68c173b9d0b2b1d1e6ed3.slice - libcontainer container kubepods-burstable-podde7608229fd68c173b9d0b2b1d1e6ed3.slice.
Mar 17 17:43:52.126208 kubelet[2399]: E0317 17:43:52.126151 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.130546 systemd[1]: Created slice kubepods-burstable-podcdfd17329059b1e403a7e5b6e61023bc.slice - libcontainer container kubepods-burstable-podcdfd17329059b1e403a7e5b6e61023bc.slice.
Mar 17 17:43:52.133019 kubelet[2399]: E0317 17:43:52.132804 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.145410 systemd[1]: Created slice kubepods-burstable-pod92d4fdffd5a9e34d823c13795ce51981.slice - libcontainer container kubepods-burstable-pod92d4fdffd5a9e34d823c13795ce51981.slice.
Mar 17 17:43:52.148766 kubelet[2399]: E0317 17:43:52.148369 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.150114 kubelet[2399]: I0317 17:43:52.150091 2399 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.150534 kubelet[2399]: E0317 17:43:52.150490 2399 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.122.152:6443/api/v1/nodes\": dial tcp 88.198.122.152:6443: connect: connection refused" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.168039 kubelet[2399]: I0317 17:43:52.167956 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.168310 kubelet[2399]: I0317 17:43:52.168270 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.168478 kubelet[2399]: E0317 17:43:52.168300 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.122.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-d76a313bf1?timeout=10s\": dial tcp 88.198.122.152:6443: connect: connection refused" interval="400ms"
Mar 17 17:43:52.168478 kubelet[2399]: I0317 17:43:52.168439 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.168687 kubelet[2399]: I0317 17:43:52.168663 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.168873 kubelet[2399]: I0317 17:43:52.168837 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.169040 kubelet[2399]: I0317 17:43:52.169006 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdfd17329059b1e403a7e5b6e61023bc-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-4-d76a313bf1\" (UID: \"cdfd17329059b1e403a7e5b6e61023bc\") " pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.169186 kubelet[2399]: I0317 17:43:52.169166 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.169441 kubelet[2399]: I0317 17:43:52.169316 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.169441 kubelet[2399]: I0317 17:43:52.169377 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.353133 kubelet[2399]: I0317 17:43:52.352940 2399 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.353888 kubelet[2399]: E0317 17:43:52.353857 2399 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.122.152:6443/api/v1/nodes\": dial tcp 88.198.122.152:6443: connect: connection refused" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.428781 containerd[1491]: time="2025-03-17T17:43:52.428292607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-4-d76a313bf1,Uid:de7608229fd68c173b9d0b2b1d1e6ed3,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:52.434650 containerd[1491]: time="2025-03-17T17:43:52.434532508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-4-d76a313bf1,Uid:cdfd17329059b1e403a7e5b6e61023bc,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:52.450670 containerd[1491]: time="2025-03-17T17:43:52.450468009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-4-d76a313bf1,Uid:92d4fdffd5a9e34d823c13795ce51981,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:52.569395 kubelet[2399]: E0317 17:43:52.569297 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.122.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-d76a313bf1?timeout=10s\": dial tcp 88.198.122.152:6443: connect: connection refused" interval="800ms"
Mar 17 17:43:52.756536 kubelet[2399]: I0317 17:43:52.756329 2399 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.756931 kubelet[2399]: E0317 17:43:52.756882 2399 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://88.198.122.152:6443/api/v1/nodes\": dial tcp 88.198.122.152:6443: connect: connection refused" node="ci-4152-2-2-4-d76a313bf1"
Mar 17 17:43:52.959245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102611434.mount: Deactivated successfully.
Mar 17 17:43:52.967363 containerd[1491]: time="2025-03-17T17:43:52.966961990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:52.969028 containerd[1491]: time="2025-03-17T17:43:52.968953728Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Mar 17 17:43:52.970986 containerd[1491]: time="2025-03-17T17:43:52.970944545Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:52.972576 containerd[1491]: time="2025-03-17T17:43:52.972538152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:52.974160 kubelet[2399]: W0317 17:43:52.974028 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.198.122.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-d76a313bf1&limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:52.974160 kubelet[2399]: E0317 17:43:52.974100 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.122.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-d76a313bf1&limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:52.974915 containerd[1491]: time="2025-03-17T17:43:52.974859299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:43:52.976603 containerd[1491]: time="2025-03-17T17:43:52.976457545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:52.977631 containerd[1491]: time="2025-03-17T17:43:52.977473974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.077444ms"
Mar 17 17:43:52.979958 containerd[1491]: time="2025-03-17T17:43:52.979334228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:43:52.979958 containerd[1491]: time="2025-03-17T17:43:52.979842883Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:52.984844 containerd[1491]: time="2025-03-17T17:43:52.984796586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.206574ms"
Mar 17 17:43:52.985790 containerd[1491]: time="2025-03-17T17:43:52.985545248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.909457ms"
Mar 17 17:43:53.028698 kubelet[2399]: W0317 17:43:53.028635 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.122.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:53.028911 kubelet[2399]: E0317 17:43:53.028889 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.122.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:53.091664 kubelet[2399]: W0317 17:43:53.091615 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.122.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:53.091664 kubelet[2399]: E0317 17:43:53.091665 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.122.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:53.099014 containerd[1491]: time="2025-03-17T17:43:53.098850335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:53.099014 containerd[1491]: time="2025-03-17T17:43:53.098947818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:53.099014 containerd[1491]: time="2025-03-17T17:43:53.098963178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.099671 containerd[1491]: time="2025-03-17T17:43:53.099142943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.103575 containerd[1491]: time="2025-03-17T17:43:53.103454228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:53.104136 containerd[1491]: time="2025-03-17T17:43:53.103894761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:53.104136 containerd[1491]: time="2025-03-17T17:43:53.103940603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:53.104136 containerd[1491]: time="2025-03-17T17:43:53.103956203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.104474 containerd[1491]: time="2025-03-17T17:43:53.104047966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.105048 containerd[1491]: time="2025-03-17T17:43:53.104986033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:53.105194 containerd[1491]: time="2025-03-17T17:43:53.105147758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.106863 containerd[1491]: time="2025-03-17T17:43:53.105605851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:53.128832 systemd[1]: Started cri-containerd-10456d8da2563fd549812763b6868926dac44fcf5bcb293119373a515d281307.scope - libcontainer container 10456d8da2563fd549812763b6868926dac44fcf5bcb293119373a515d281307.
Mar 17 17:43:53.142949 systemd[1]: Started cri-containerd-ca0440d9acf80ac103c74dcd8b11047a5a7e58393983372388894e21b2e0ae1b.scope - libcontainer container ca0440d9acf80ac103c74dcd8b11047a5a7e58393983372388894e21b2e0ae1b.
Mar 17 17:43:53.154795 systemd[1]: Started cri-containerd-641a99baeaca45bc1b43eb46ac83e895b3e4ec5a0eb43f18c9bd6b233103b6a8.scope - libcontainer container 641a99baeaca45bc1b43eb46ac83e895b3e4ec5a0eb43f18c9bd6b233103b6a8.
Mar 17 17:43:53.160977 kubelet[2399]: W0317 17:43:53.160914 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.198.122.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.122.152:6443: connect: connection refused
Mar 17 17:43:53.161122 kubelet[2399]: E0317 17:43:53.161012 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.122.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.122.152:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:53.212042 containerd[1491]: time="2025-03-17T17:43:53.211928137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-4-d76a313bf1,Uid:92d4fdffd5a9e34d823c13795ce51981,Namespace:kube-system,Attempt:0,} returns sandbox id \"641a99baeaca45bc1b43eb46ac83e895b3e4ec5a0eb43f18c9bd6b233103b6a8\""
Mar 17 17:43:53.213126 containerd[1491]: time="2025-03-17T17:43:53.213084170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-4-d76a313bf1,Uid:de7608229fd68c173b9d0b2b1d1e6ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0440d9acf80ac103c74dcd8b11047a5a7e58393983372388894e21b2e0ae1b\""
Mar 17 17:43:53.218064 containerd[1491]: time="2025-03-17T17:43:53.218013953Z" level=info msg="CreateContainer within sandbox \"ca0440d9acf80ac103c74dcd8b11047a5a7e58393983372388894e21b2e0ae1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:43:53.218349 containerd[1491]: time="2025-03-17T17:43:53.218204799Z" level=info msg="CreateContainer within sandbox \"641a99baeaca45bc1b43eb46ac83e895b3e4ec5a0eb43f18c9bd6b233103b6a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:43:53.223220 containerd[1491]: time="2025-03-17T17:43:53.223179023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-4-d76a313bf1,Uid:cdfd17329059b1e403a7e5b6e61023bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"10456d8da2563fd549812763b6868926dac44fcf5bcb293119373a515d281307\""
Mar 17 17:43:53.227262 containerd[1491]: time="2025-03-17T17:43:53.227218420Z" level=info msg="CreateContainer within sandbox \"10456d8da2563fd549812763b6868926dac44fcf5bcb293119373a515d281307\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:43:53.238881 containerd[1491]: time="2025-03-17T17:43:53.238370624Z" level=info msg="CreateContainer within sandbox \"641a99baeaca45bc1b43eb46ac83e895b3e4ec5a0eb43f18c9bd6b233103b6a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"83ab606742f7455501393b3c25bd555258b60c7ea4044ea7afd6052f41149c25\""
Mar 17 17:43:53.239913 containerd[1491]: time="2025-03-17T17:43:53.239874668Z" level=info msg="StartContainer for \"83ab606742f7455501393b3c25bd555258b60c7ea4044ea7afd6052f41149c25\""
Mar 17 17:43:53.241611 containerd[1491]: time="2025-03-17T17:43:53.241468674Z" level=info msg="CreateContainer
within sandbox \"ca0440d9acf80ac103c74dcd8b11047a5a7e58393983372388894e21b2e0ae1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d160e253be6578fcdb7655f8b35f1e99f14ba3dfb25b1019fda5ebb6ba874704\"" Mar 17 17:43:53.242029 containerd[1491]: time="2025-03-17T17:43:53.241982809Z" level=info msg="StartContainer for \"d160e253be6578fcdb7655f8b35f1e99f14ba3dfb25b1019fda5ebb6ba874704\"" Mar 17 17:43:53.246099 containerd[1491]: time="2025-03-17T17:43:53.245960964Z" level=info msg="CreateContainer within sandbox \"10456d8da2563fd549812763b6868926dac44fcf5bcb293119373a515d281307\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbc1d4e79e576bc6fea3ba796452016b3643d33b0eec0a86369b316eeb814c20\"" Mar 17 17:43:53.247263 containerd[1491]: time="2025-03-17T17:43:53.247202000Z" level=info msg="StartContainer for \"bbc1d4e79e576bc6fea3ba796452016b3643d33b0eec0a86369b316eeb814c20\"" Mar 17 17:43:53.282915 systemd[1]: Started cri-containerd-83ab606742f7455501393b3c25bd555258b60c7ea4044ea7afd6052f41149c25.scope - libcontainer container 83ab606742f7455501393b3c25bd555258b60c7ea4044ea7afd6052f41149c25. Mar 17 17:43:53.286383 systemd[1]: Started cri-containerd-d160e253be6578fcdb7655f8b35f1e99f14ba3dfb25b1019fda5ebb6ba874704.scope - libcontainer container d160e253be6578fcdb7655f8b35f1e99f14ba3dfb25b1019fda5ebb6ba874704. Mar 17 17:43:53.294905 systemd[1]: Started cri-containerd-bbc1d4e79e576bc6fea3ba796452016b3643d33b0eec0a86369b316eeb814c20.scope - libcontainer container bbc1d4e79e576bc6fea3ba796452016b3643d33b0eec0a86369b316eeb814c20. 
Mar 17 17:43:53.329198 containerd[1491]: time="2025-03-17T17:43:53.329089257Z" level=info msg="StartContainer for \"d160e253be6578fcdb7655f8b35f1e99f14ba3dfb25b1019fda5ebb6ba874704\" returns successfully" Mar 17 17:43:53.363544 containerd[1491]: time="2025-03-17T17:43:53.361253671Z" level=info msg="StartContainer for \"83ab606742f7455501393b3c25bd555258b60c7ea4044ea7afd6052f41149c25\" returns successfully" Mar 17 17:43:53.370694 kubelet[2399]: E0317 17:43:53.370653 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.122.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-d76a313bf1?timeout=10s\": dial tcp 88.198.122.152:6443: connect: connection refused" interval="1.6s" Mar 17 17:43:53.376606 containerd[1491]: time="2025-03-17T17:43:53.376482593Z" level=info msg="StartContainer for \"bbc1d4e79e576bc6fea3ba796452016b3643d33b0eec0a86369b316eeb814c20\" returns successfully" Mar 17 17:43:53.559639 kubelet[2399]: I0317 17:43:53.559543 2399 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:54.023518 kubelet[2399]: E0317 17:43:54.021970 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:54.024523 kubelet[2399]: E0317 17:43:54.023843 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:54.027792 kubelet[2399]: E0317 17:43:54.027651 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.030534 kubelet[2399]: E0317 17:43:55.029869 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.030534 kubelet[2399]: E0317 17:43:55.030222 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.030534 kubelet[2399]: E0317 17:43:55.030466 2399 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.793391 kubelet[2399]: E0317 17:43:55.793353 2399 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-2-4-d76a313bf1\" not found" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.815151 kubelet[2399]: I0317 17:43:55.815007 2399 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.815151 kubelet[2399]: E0317 17:43:55.815050 2399 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4152-2-2-4-d76a313bf1\": node \"ci-4152-2-2-4-d76a313bf1\" not found" Mar 17 17:43:55.831101 kubelet[2399]: E0317 17:43:55.831022 2399 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" Mar 17 17:43:55.954677 kubelet[2399]: I0317 17:43:55.954634 2399 apiserver.go:52] "Watching apiserver" Mar 17 17:43:55.964578 kubelet[2399]: I0317 17:43:55.964531 2399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:43:55.969110 kubelet[2399]: I0317 17:43:55.968632 2399 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.979918 kubelet[2399]: E0317 17:43:55.979880 2399 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.980365 kubelet[2399]: I0317 17:43:55.980105 2399 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.981996 kubelet[2399]: E0317 17:43:55.981964 2399 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.981996 kubelet[2399]: I0317 17:43:55.981995 2399 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:55.987947 kubelet[2399]: E0317 17:43:55.987898 2399 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-2-4-d76a313bf1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.171340 systemd[1]: Reloading requested from client PID 2677 ('systemctl') (unit session-7.scope)... Mar 17 17:43:58.171364 systemd[1]: Reloading... Mar 17 17:43:58.283535 zram_generator::config[2723]: No configuration found. Mar 17 17:43:58.380501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:43:58.463133 systemd[1]: Reloading finished in 291 ms. Mar 17 17:43:58.511828 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:43:58.528613 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:43:58.529658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:43:58.529727 systemd[1]: kubelet.service: Consumed 1.137s CPU time, 122.2M memory peak, 0B memory swap peak. Mar 17 17:43:58.537978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:43:58.648219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:43:58.661317 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:43:58.715358 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:43:58.715358 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:43:58.715358 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:43:58.715887 kubelet[2762]: I0317 17:43:58.715759 2762 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:43:58.723351 kubelet[2762]: I0317 17:43:58.722867 2762 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:43:58.723351 kubelet[2762]: I0317 17:43:58.722897 2762 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:43:58.723351 kubelet[2762]: I0317 17:43:58.723148 2762 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:43:58.724781 kubelet[2762]: I0317 17:43:58.724764 2762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 17 17:43:58.727199 kubelet[2762]: I0317 17:43:58.727178 2762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:43:58.730307 kubelet[2762]: E0317 17:43:58.730264 2762 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:43:58.730420 kubelet[2762]: I0317 17:43:58.730407 2762 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:43:58.735875 kubelet[2762]: I0317 17:43:58.735851 2762 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:43:58.736213 kubelet[2762]: I0317 17:43:58.736181 2762 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:43:58.736534 kubelet[2762]: I0317 17:43:58.736278 2762 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4152-2-2-4-d76a313bf1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:43:58.736675 kubelet[2762]: I0317 17:43:58.736662 2762 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:43:58.736729 kubelet[2762]: I0317 17:43:58.736721 2762 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:43:58.736822 kubelet[2762]: I0317 17:43:58.736812 2762 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:43:58.737010 kubelet[2762]: I0317 17:43:58.736997 2762 
kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:43:58.737547 kubelet[2762]: I0317 17:43:58.737064 2762 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:43:58.737547 kubelet[2762]: I0317 17:43:58.737093 2762 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:43:58.737547 kubelet[2762]: I0317 17:43:58.737103 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:43:58.742660 kubelet[2762]: I0317 17:43:58.742635 2762 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:43:58.744515 kubelet[2762]: I0317 17:43:58.743223 2762 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:43:58.745012 kubelet[2762]: I0317 17:43:58.744995 2762 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:43:58.745113 kubelet[2762]: I0317 17:43:58.745105 2762 server.go:1287] "Started kubelet" Mar 17 17:43:58.748007 kubelet[2762]: I0317 17:43:58.747989 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:43:58.753016 kubelet[2762]: I0317 17:43:58.752981 2762 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:43:58.754279 kubelet[2762]: I0317 17:43:58.754250 2762 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:43:58.755420 kubelet[2762]: I0317 17:43:58.755363 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:43:58.755758 kubelet[2762]: I0317 17:43:58.755741 2762 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:43:58.756434 kubelet[2762]: I0317 17:43:58.756415 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:43:58.758223 kubelet[2762]: 
I0317 17:43:58.758209 2762 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:43:58.758586 kubelet[2762]: E0317 17:43:58.758570 2762 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152-2-2-4-d76a313bf1\" not found" Mar 17 17:43:58.761545 kubelet[2762]: I0317 17:43:58.761527 2762 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:43:58.761868 kubelet[2762]: I0317 17:43:58.761735 2762 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:43:58.763407 kubelet[2762]: I0317 17:43:58.763374 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:43:58.764375 kubelet[2762]: I0317 17:43:58.764358 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:43:58.764523 kubelet[2762]: I0317 17:43:58.764454 2762 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:43:58.764523 kubelet[2762]: I0317 17:43:58.764475 2762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:43:58.764523 kubelet[2762]: I0317 17:43:58.764481 2762 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:43:58.764649 kubelet[2762]: E0317 17:43:58.764632 2762 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:43:58.778891 kubelet[2762]: I0317 17:43:58.778650 2762 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:43:58.779029 kubelet[2762]: I0317 17:43:58.779006 2762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:43:58.792669 kubelet[2762]: I0317 17:43:58.792641 2762 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:43:58.794860 kubelet[2762]: E0317 17:43:58.793847 2762 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857706 2762 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857726 2762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857760 2762 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857938 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857949 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857968 2762 policy_none.go:49] "None policy: Start" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857975 2762 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.857985 2762 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:43:58.858431 kubelet[2762]: I0317 17:43:58.858076 2762 state_mem.go:75] "Updated machine memory state" Mar 17 17:43:58.862984 kubelet[2762]: I0317 17:43:58.862953 2762 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:43:58.863155 kubelet[2762]: I0317 17:43:58.863138 2762 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:43:58.863189 kubelet[2762]: I0317 17:43:58.863155 2762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:43:58.864289 kubelet[2762]: I0317 17:43:58.863916 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:43:58.865877 kubelet[2762]: I0317 17:43:58.865849 2762 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.866029 
kubelet[2762]: I0317 17:43:58.865986 2762 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.866862 kubelet[2762]: I0317 17:43:58.866468 2762 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.869032 kubelet[2762]: E0317 17:43:58.868857 2762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:43:58.964222 kubelet[2762]: I0317 17:43:58.963988 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.964700 kubelet[2762]: I0317 17:43:58.964458 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.964700 kubelet[2762]: I0317 17:43:58.964557 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.964700 kubelet[2762]: I0317 17:43:58.964609 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/92d4fdffd5a9e34d823c13795ce51981-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" (UID: \"92d4fdffd5a9e34d823c13795ce51981\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.964700 kubelet[2762]: I0317 17:43:58.964640 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.965570 kubelet[2762]: I0317 17:43:58.965072 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.965570 kubelet[2762]: I0317 17:43:58.965238 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.965570 kubelet[2762]: I0317 17:43:58.965272 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de7608229fd68c173b9d0b2b1d1e6ed3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-4-d76a313bf1\" (UID: \"de7608229fd68c173b9d0b2b1d1e6ed3\") " 
pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.965570 kubelet[2762]: I0317 17:43:58.965305 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdfd17329059b1e403a7e5b6e61023bc-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-4-d76a313bf1\" (UID: \"cdfd17329059b1e403a7e5b6e61023bc\") " pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.980891 kubelet[2762]: I0317 17:43:58.980823 2762 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.992007 kubelet[2762]: I0317 17:43:58.991672 2762 kubelet_node_status.go:125] "Node was previously registered" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:58.992007 kubelet[2762]: I0317 17:43:58.991764 2762 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:59.172497 sudo[2795]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:43:59.172921 sudo[2795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:43:59.641873 sudo[2795]: pam_unix(sudo:session): session closed for user root Mar 17 17:43:59.739405 kubelet[2762]: I0317 17:43:59.739157 2762 apiserver.go:52] "Watching apiserver" Mar 17 17:43:59.761949 kubelet[2762]: I0317 17:43:59.761894 2762 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:43:59.831195 kubelet[2762]: I0317 17:43:59.830723 2762 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:59.840110 kubelet[2762]: E0317 17:43:59.840079 2762 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-2-4-d76a313bf1\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" Mar 17 17:43:59.875369 kubelet[2762]: 
I0317 17:43:59.875127 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-4-d76a313bf1" podStartSLOduration=1.8751025270000001 podStartE2EDuration="1.875102527s" podCreationTimestamp="2025-03-17 17:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:59.859302261 +0000 UTC m=+1.194473210" watchObservedRunningTime="2025-03-17 17:43:59.875102527 +0000 UTC m=+1.210273516" Mar 17 17:43:59.887370 kubelet[2762]: I0317 17:43:59.887190 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-2-4-d76a313bf1" podStartSLOduration=1.887172884 podStartE2EDuration="1.887172884s" podCreationTimestamp="2025-03-17 17:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:59.87620376 +0000 UTC m=+1.211374829" watchObservedRunningTime="2025-03-17 17:43:59.887172884 +0000 UTC m=+1.222343833" Mar 17 17:43:59.900837 kubelet[2762]: I0317 17:43:59.900400 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-4-d76a313bf1" podStartSLOduration=1.9003763139999998 podStartE2EDuration="1.900376314s" podCreationTimestamp="2025-03-17 17:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:59.888221155 +0000 UTC m=+1.223392104" watchObservedRunningTime="2025-03-17 17:43:59.900376314 +0000 UTC m=+1.235547263" Mar 17 17:44:02.060781 sudo[1883]: pam_unix(sudo:session): session closed for user root Mar 17 17:44:02.219453 sshd[1882]: Connection closed by 139.178.89.65 port 54524 Mar 17 17:44:02.220708 sshd-session[1880]: pam_unix(sshd:session): session closed for user core Mar 17 17:44:02.224823 
systemd[1]: sshd@6-88.198.122.152:22-139.178.89.65:54524.service: Deactivated successfully. Mar 17 17:44:02.227362 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:44:02.227825 systemd[1]: session-7.scope: Consumed 7.038s CPU time, 159.0M memory peak, 0B memory swap peak. Mar 17 17:44:02.229688 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:44:02.230750 systemd-logind[1467]: Removed session 7. Mar 17 17:44:02.452366 kubelet[2762]: I0317 17:44:02.451894 2762 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:44:02.452704 containerd[1491]: time="2025-03-17T17:44:02.452180359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:44:02.453931 kubelet[2762]: I0317 17:44:02.453697 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:44:02.523844 systemd[1]: Started sshd@7-88.198.122.152:22-64.176.71.124:59584.service - OpenSSH per-connection server daemon (64.176.71.124:59584). Mar 17 17:44:02.691166 sshd[2829]: Connection closed by authenticating user root 64.176.71.124 port 59584 [preauth] Mar 17 17:44:02.694178 systemd[1]: sshd@7-88.198.122.152:22-64.176.71.124:59584.service: Deactivated successfully. Mar 17 17:44:02.866464 systemd[1]: Created slice kubepods-besteffort-pod22b29643_c7ba_4c4f_9cec_455a3210ae22.slice - libcontainer container kubepods-besteffort-pod22b29643_c7ba_4c4f_9cec_455a3210ae22.slice. 
Mar 17 17:44:02.893143 kubelet[2762]: I0317 17:44:02.893091 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22b29643-c7ba-4c4f-9cec-455a3210ae22-kube-proxy\") pod \"kube-proxy-76cqv\" (UID: \"22b29643-c7ba-4c4f-9cec-455a3210ae22\") " pod="kube-system/kube-proxy-76cqv" Mar 17 17:44:02.893143 kubelet[2762]: I0317 17:44:02.893133 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b29643-c7ba-4c4f-9cec-455a3210ae22-lib-modules\") pod \"kube-proxy-76cqv\" (UID: \"22b29643-c7ba-4c4f-9cec-455a3210ae22\") " pod="kube-system/kube-proxy-76cqv" Mar 17 17:44:02.893337 kubelet[2762]: I0317 17:44:02.893156 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b29643-c7ba-4c4f-9cec-455a3210ae22-xtables-lock\") pod \"kube-proxy-76cqv\" (UID: \"22b29643-c7ba-4c4f-9cec-455a3210ae22\") " pod="kube-system/kube-proxy-76cqv" Mar 17 17:44:02.893337 kubelet[2762]: I0317 17:44:02.893175 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlxl\" (UniqueName: \"kubernetes.io/projected/22b29643-c7ba-4c4f-9cec-455a3210ae22-kube-api-access-mrlxl\") pod \"kube-proxy-76cqv\" (UID: \"22b29643-c7ba-4c4f-9cec-455a3210ae22\") " pod="kube-system/kube-proxy-76cqv" Mar 17 17:44:02.899953 systemd[1]: Created slice kubepods-burstable-pod9cc258aa_21f4_4983_b041_01b98f5f822b.slice - libcontainer container kubepods-burstable-pod9cc258aa_21f4_4983_b041_01b98f5f822b.slice. 
Mar 17 17:44:02.993734 kubelet[2762]: I0317 17:44:02.993677 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-cgroup\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.994259 kubelet[2762]: I0317 17:44:02.994043 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-hubble-tls\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.994259 kubelet[2762]: I0317 17:44:02.994194 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-lib-modules\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.994835 kubelet[2762]: I0317 17:44:02.994584 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-etc-cni-netd\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.994835 kubelet[2762]: I0317 17:44:02.994699 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-xtables-lock\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.995180 kubelet[2762]: I0317 17:44:02.994790 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-kernel\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.995180 kubelet[2762]: I0317 17:44:02.995125 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-config-path\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996573 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-net\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996675 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-bpf-maps\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996749 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-run\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996788 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-hostproc\") pod 
\"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996823 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cni-path\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.997593 kubelet[2762]: I0317 17:44:02.996878 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cc258aa-21f4-4983-b041-01b98f5f822b-clustermesh-secrets\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:02.998120 kubelet[2762]: I0317 17:44:02.996913 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwtm\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm\") pod \"cilium-fcsqd\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " pod="kube-system/cilium-fcsqd" Mar 17 17:44:03.008641 kubelet[2762]: E0317 17:44:03.007379 2762 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:44:03.008641 kubelet[2762]: E0317 17:44:03.007415 2762 projected.go:194] Error preparing data for projected volume kube-api-access-mrlxl for pod kube-system/kube-proxy-76cqv: configmap "kube-root-ca.crt" not found Mar 17 17:44:03.008641 kubelet[2762]: E0317 17:44:03.007474 2762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22b29643-c7ba-4c4f-9cec-455a3210ae22-kube-api-access-mrlxl podName:22b29643-c7ba-4c4f-9cec-455a3210ae22 nodeName:}" failed. 
No retries permitted until 2025-03-17 17:44:03.50745356 +0000 UTC m=+4.842624469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mrlxl" (UniqueName: "kubernetes.io/projected/22b29643-c7ba-4c4f-9cec-455a3210ae22-kube-api-access-mrlxl") pod "kube-proxy-76cqv" (UID: "22b29643-c7ba-4c4f-9cec-455a3210ae22") : configmap "kube-root-ca.crt" not found Mar 17 17:44:03.120398 kubelet[2762]: E0317 17:44:03.119352 2762 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:44:03.120398 kubelet[2762]: E0317 17:44:03.119391 2762 projected.go:194] Error preparing data for projected volume kube-api-access-jmwtm for pod kube-system/cilium-fcsqd: configmap "kube-root-ca.crt" not found Mar 17 17:44:03.120398 kubelet[2762]: E0317 17:44:03.119435 2762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm podName:9cc258aa-21f4-4983-b041-01b98f5f822b nodeName:}" failed. No retries permitted until 2025-03-17 17:44:03.619418219 +0000 UTC m=+4.954589168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jmwtm" (UniqueName: "kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm") pod "cilium-fcsqd" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b") : configmap "kube-root-ca.crt" not found Mar 17 17:44:03.561000 systemd[1]: Created slice kubepods-besteffort-pod45d6880b_4381_47a1_9fe6_6cc68c1a51cf.slice - libcontainer container kubepods-besteffort-pod45d6880b_4381_47a1_9fe6_6cc68c1a51cf.slice. 
Mar 17 17:44:03.600963 kubelet[2762]: I0317 17:44:03.600838 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprlf\" (UniqueName: \"kubernetes.io/projected/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-kube-api-access-hprlf\") pod \"cilium-operator-6c4d7847fc-9vcrg\" (UID: \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\") " pod="kube-system/cilium-operator-6c4d7847fc-9vcrg" Mar 17 17:44:03.600963 kubelet[2762]: I0317 17:44:03.600912 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9vcrg\" (UID: \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\") " pod="kube-system/cilium-operator-6c4d7847fc-9vcrg" Mar 17 17:44:03.782680 containerd[1491]: time="2025-03-17T17:44:03.781563245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76cqv,Uid:22b29643-c7ba-4c4f-9cec-455a3210ae22,Namespace:kube-system,Attempt:0,}" Mar 17 17:44:03.804328 containerd[1491]: time="2025-03-17T17:44:03.803918032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcsqd,Uid:9cc258aa-21f4-4983-b041-01b98f5f822b,Namespace:kube-system,Attempt:0,}" Mar 17 17:44:03.815949 containerd[1491]: time="2025-03-17T17:44:03.814920960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:44:03.815949 containerd[1491]: time="2025-03-17T17:44:03.814994522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:44:03.815949 containerd[1491]: time="2025-03-17T17:44:03.815011323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.815949 containerd[1491]: time="2025-03-17T17:44:03.815103446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.841958 containerd[1491]: time="2025-03-17T17:44:03.841670478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:44:03.841958 containerd[1491]: time="2025-03-17T17:44:03.841725440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:44:03.841958 containerd[1491]: time="2025-03-17T17:44:03.841736560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.841958 containerd[1491]: time="2025-03-17T17:44:03.841805962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.844008 systemd[1]: Started cri-containerd-ef13ec97230db6bce46729d0145aa88cba76ccabb0ec3fbef37cdef2db08d43a.scope - libcontainer container ef13ec97230db6bce46729d0145aa88cba76ccabb0ec3fbef37cdef2db08d43a. Mar 17 17:44:03.866368 containerd[1491]: time="2025-03-17T17:44:03.866299692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9vcrg,Uid:45d6880b-4381-47a1-9fe6-6cc68c1a51cf,Namespace:kube-system,Attempt:0,}" Mar 17 17:44:03.866935 systemd[1]: Started cri-containerd-a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba.scope - libcontainer container a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba. 
Mar 17 17:44:03.900306 containerd[1491]: time="2025-03-17T17:44:03.900083460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76cqv,Uid:22b29643-c7ba-4c4f-9cec-455a3210ae22,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef13ec97230db6bce46729d0145aa88cba76ccabb0ec3fbef37cdef2db08d43a\"" Mar 17 17:44:03.908084 containerd[1491]: time="2025-03-17T17:44:03.907969255Z" level=info msg="CreateContainer within sandbox \"ef13ec97230db6bce46729d0145aa88cba76ccabb0ec3fbef37cdef2db08d43a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:44:03.914294 containerd[1491]: time="2025-03-17T17:44:03.913631224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:44:03.914294 containerd[1491]: time="2025-03-17T17:44:03.913840670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:44:03.914294 containerd[1491]: time="2025-03-17T17:44:03.913858711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.916294 containerd[1491]: time="2025-03-17T17:44:03.914950423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:44:03.918072 containerd[1491]: time="2025-03-17T17:44:03.918009355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcsqd,Uid:9cc258aa-21f4-4983-b041-01b98f5f822b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\"" Mar 17 17:44:03.921813 containerd[1491]: time="2025-03-17T17:44:03.921772507Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:44:03.934910 containerd[1491]: time="2025-03-17T17:44:03.934812896Z" level=info msg="CreateContainer within sandbox \"ef13ec97230db6bce46729d0145aa88cba76ccabb0ec3fbef37cdef2db08d43a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0085580ebf3f3207a1d429ff110dbec7c43d5cbba63d33cd15410e66884631cb\"" Mar 17 17:44:03.936436 containerd[1491]: time="2025-03-17T17:44:03.936216097Z" level=info msg="StartContainer for \"0085580ebf3f3207a1d429ff110dbec7c43d5cbba63d33cd15410e66884631cb\"" Mar 17 17:44:03.946055 systemd[1]: Started cri-containerd-5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f.scope - libcontainer container 5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f. Mar 17 17:44:03.979573 systemd[1]: Started cri-containerd-0085580ebf3f3207a1d429ff110dbec7c43d5cbba63d33cd15410e66884631cb.scope - libcontainer container 0085580ebf3f3207a1d429ff110dbec7c43d5cbba63d33cd15410e66884631cb. 
Mar 17 17:44:04.000598 containerd[1491]: time="2025-03-17T17:44:04.000241087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9vcrg,Uid:45d6880b-4381-47a1-9fe6-6cc68c1a51cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\"" Mar 17 17:44:04.024278 containerd[1491]: time="2025-03-17T17:44:04.024113360Z" level=info msg="StartContainer for \"0085580ebf3f3207a1d429ff110dbec7c43d5cbba63d33cd15410e66884631cb\" returns successfully" Mar 17 17:44:04.880361 kubelet[2762]: I0317 17:44:04.880299 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-76cqv" podStartSLOduration=2.880263189 podStartE2EDuration="2.880263189s" podCreationTimestamp="2025-03-17 17:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:44:04.86555791 +0000 UTC m=+6.200728859" watchObservedRunningTime="2025-03-17 17:44:04.880263189 +0000 UTC m=+6.215434138" Mar 17 17:44:10.421950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477140533.mount: Deactivated successfully. 
Mar 17 17:44:11.806418 containerd[1491]: time="2025-03-17T17:44:11.806350342Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:44:11.807756 containerd[1491]: time="2025-03-17T17:44:11.807618780Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:44:11.810565 containerd[1491]: time="2025-03-17T17:44:11.808665292Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:44:11.813118 containerd[1491]: time="2025-03-17T17:44:11.813076226Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.89102055s" Mar 17 17:44:11.813231 containerd[1491]: time="2025-03-17T17:44:11.813214990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:44:11.815340 containerd[1491]: time="2025-03-17T17:44:11.815317174Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:44:11.816502 containerd[1491]: time="2025-03-17T17:44:11.816471768Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:44:11.835173 containerd[1491]: time="2025-03-17T17:44:11.835129894Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\"" Mar 17 17:44:11.836789 containerd[1491]: time="2025-03-17T17:44:11.836080643Z" level=info msg="StartContainer for \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\"" Mar 17 17:44:11.879744 systemd[1]: Started cri-containerd-151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918.scope - libcontainer container 151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918. Mar 17 17:44:11.909241 containerd[1491]: time="2025-03-17T17:44:11.909194498Z" level=info msg="StartContainer for \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\" returns successfully" Mar 17 17:44:11.925203 systemd[1]: cri-containerd-151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918.scope: Deactivated successfully. Mar 17 17:44:12.101778 containerd[1491]: time="2025-03-17T17:44:12.101463769Z" level=info msg="shim disconnected" id=151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918 namespace=k8s.io Mar 17 17:44:12.101778 containerd[1491]: time="2025-03-17T17:44:12.101543812Z" level=warning msg="cleaning up after shim disconnected" id=151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918 namespace=k8s.io Mar 17 17:44:12.101778 containerd[1491]: time="2025-03-17T17:44:12.101554772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:44:12.830421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918-rootfs.mount: Deactivated successfully. 
Mar 17 17:44:12.888603 containerd[1491]: time="2025-03-17T17:44:12.888256571Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:44:12.910688 containerd[1491]: time="2025-03-17T17:44:12.910634850Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\"" Mar 17 17:44:12.912277 containerd[1491]: time="2025-03-17T17:44:12.911578319Z" level=info msg="StartContainer for \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\"" Mar 17 17:44:12.951869 systemd[1]: Started cri-containerd-65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663.scope - libcontainer container 65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663. Mar 17 17:44:12.985476 containerd[1491]: time="2025-03-17T17:44:12.985379719Z" level=info msg="StartContainer for \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\" returns successfully" Mar 17 17:44:12.998446 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:44:12.998957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:44:12.999030 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:44:13.008266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:44:13.008533 systemd[1]: cri-containerd-65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663.scope: Deactivated successfully. Mar 17 17:44:13.035768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 17:44:13.039877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663-rootfs.mount: Deactivated successfully. Mar 17 17:44:13.042625 containerd[1491]: time="2025-03-17T17:44:13.042367210Z" level=info msg="shim disconnected" id=65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663 namespace=k8s.io Mar 17 17:44:13.042625 containerd[1491]: time="2025-03-17T17:44:13.042432852Z" level=warning msg="cleaning up after shim disconnected" id=65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663 namespace=k8s.io Mar 17 17:44:13.042625 containerd[1491]: time="2025-03-17T17:44:13.042441093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:44:13.892564 containerd[1491]: time="2025-03-17T17:44:13.892490937Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:44:13.918308 containerd[1491]: time="2025-03-17T17:44:13.917847108Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\"" Mar 17 17:44:13.918705 containerd[1491]: time="2025-03-17T17:44:13.918674173Z" level=info msg="StartContainer for \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\"" Mar 17 17:44:13.959153 systemd[1]: Started cri-containerd-0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030.scope - libcontainer container 0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030. 
Mar 17 17:44:13.993025 containerd[1491]: time="2025-03-17T17:44:13.992849708Z" level=info msg="StartContainer for \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\" returns successfully" Mar 17 17:44:13.996842 systemd[1]: cri-containerd-0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030.scope: Deactivated successfully. Mar 17 17:44:14.028386 containerd[1491]: time="2025-03-17T17:44:14.028067380Z" level=info msg="shim disconnected" id=0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030 namespace=k8s.io Mar 17 17:44:14.028386 containerd[1491]: time="2025-03-17T17:44:14.028127702Z" level=warning msg="cleaning up after shim disconnected" id=0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030 namespace=k8s.io Mar 17 17:44:14.028386 containerd[1491]: time="2025-03-17T17:44:14.028137502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:44:14.830414 systemd[1]: run-containerd-runc-k8s.io-0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030-runc.kSfeZQ.mount: Deactivated successfully. Mar 17 17:44:14.830639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030-rootfs.mount: Deactivated successfully. 
Mar 17 17:44:14.901429 containerd[1491]: time="2025-03-17T17:44:14.899365793Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:44:14.918609 containerd[1491]: time="2025-03-17T17:44:14.918565417Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\"" Mar 17 17:44:14.919425 containerd[1491]: time="2025-03-17T17:44:14.919398363Z" level=info msg="StartContainer for \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\"" Mar 17 17:44:14.952709 systemd[1]: Started cri-containerd-1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3.scope - libcontainer container 1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3. Mar 17 17:44:14.977064 systemd[1]: cri-containerd-1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3.scope: Deactivated successfully. 
Mar 17 17:44:14.985539 containerd[1491]: time="2025-03-17T17:44:14.983731762Z" level=info msg="StartContainer for \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\" returns successfully" Mar 17 17:44:15.013695 containerd[1491]: time="2025-03-17T17:44:15.013620272Z" level=info msg="shim disconnected" id=1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3 namespace=k8s.io Mar 17 17:44:15.013934 containerd[1491]: time="2025-03-17T17:44:15.013915641Z" level=warning msg="cleaning up after shim disconnected" id=1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3 namespace=k8s.io Mar 17 17:44:15.014004 containerd[1491]: time="2025-03-17T17:44:15.013990484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:44:15.830749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3-rootfs.mount: Deactivated successfully. Mar 17 17:44:15.909583 containerd[1491]: time="2025-03-17T17:44:15.907947668Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:44:15.927372 containerd[1491]: time="2025-03-17T17:44:15.926405151Z" level=info msg="CreateContainer within sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\"" Mar 17 17:44:15.930570 containerd[1491]: time="2025-03-17T17:44:15.929163435Z" level=info msg="StartContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\"" Mar 17 17:44:15.964865 systemd[1]: Started cri-containerd-66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d.scope - libcontainer container 66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d. 
Mar 17 17:44:15.998130 containerd[1491]: time="2025-03-17T17:44:15.998055457Z" level=info msg="StartContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" returns successfully" Mar 17 17:44:16.155731 kubelet[2762]: I0317 17:44:16.154925 2762 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:44:16.200479 systemd[1]: Created slice kubepods-burstable-pod0c0127c5_2a91_4623_89f3_78aab8b38ba9.slice - libcontainer container kubepods-burstable-pod0c0127c5_2a91_4623_89f3_78aab8b38ba9.slice. Mar 17 17:44:16.211837 systemd[1]: Created slice kubepods-burstable-pod3e3eacd7_bf87_4bb3_af76_77d1dcf84392.slice - libcontainer container kubepods-burstable-pod3e3eacd7_bf87_4bb3_af76_77d1dcf84392.slice. Mar 17 17:44:16.290983 kubelet[2762]: I0317 17:44:16.290917 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccpx\" (UniqueName: \"kubernetes.io/projected/0c0127c5-2a91-4623-89f3-78aab8b38ba9-kube-api-access-mccpx\") pod \"coredns-668d6bf9bc-fcsxx\" (UID: \"0c0127c5-2a91-4623-89f3-78aab8b38ba9\") " pod="kube-system/coredns-668d6bf9bc-fcsxx" Mar 17 17:44:16.290983 kubelet[2762]: I0317 17:44:16.290966 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c0127c5-2a91-4623-89f3-78aab8b38ba9-config-volume\") pod \"coredns-668d6bf9bc-fcsxx\" (UID: \"0c0127c5-2a91-4623-89f3-78aab8b38ba9\") " pod="kube-system/coredns-668d6bf9bc-fcsxx" Mar 17 17:44:16.291435 kubelet[2762]: I0317 17:44:16.291047 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59chs\" (UniqueName: \"kubernetes.io/projected/3e3eacd7-bf87-4bb3-af76-77d1dcf84392-kube-api-access-59chs\") pod \"coredns-668d6bf9bc-9vbnx\" (UID: \"3e3eacd7-bf87-4bb3-af76-77d1dcf84392\") " pod="kube-system/coredns-668d6bf9bc-9vbnx" Mar 17 
17:44:16.291622 kubelet[2762]: I0317 17:44:16.291473 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e3eacd7-bf87-4bb3-af76-77d1dcf84392-config-volume\") pod \"coredns-668d6bf9bc-9vbnx\" (UID: \"3e3eacd7-bf87-4bb3-af76-77d1dcf84392\") " pod="kube-system/coredns-668d6bf9bc-9vbnx" Mar 17 17:44:16.508811 containerd[1491]: time="2025-03-17T17:44:16.508776896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fcsxx,Uid:0c0127c5-2a91-4623-89f3-78aab8b38ba9,Namespace:kube-system,Attempt:0,}" Mar 17 17:44:16.520336 containerd[1491]: time="2025-03-17T17:44:16.520293368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vbnx,Uid:3e3eacd7-bf87-4bb3-af76-77d1dcf84392,Namespace:kube-system,Attempt:0,}" Mar 17 17:44:16.809771 containerd[1491]: time="2025-03-17T17:44:16.808852222Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:44:16.810773 containerd[1491]: time="2025-03-17T17:44:16.810695558Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:44:16.811682 containerd[1491]: time="2025-03-17T17:44:16.811585945Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:44:16.813534 containerd[1491]: time="2025-03-17T17:44:16.812884385Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.997062396s" Mar 17 17:44:16.813534 containerd[1491]: time="2025-03-17T17:44:16.812918626Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:44:16.816726 containerd[1491]: time="2025-03-17T17:44:16.816678461Z" level=info msg="CreateContainer within sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:44:16.834243 containerd[1491]: time="2025-03-17T17:44:16.834188756Z" level=info msg="CreateContainer within sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\"" Mar 17 17:44:16.834451 systemd[1]: run-containerd-runc-k8s.io-66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d-runc.dPrFah.mount: Deactivated successfully. Mar 17 17:44:16.836638 containerd[1491]: time="2025-03-17T17:44:16.835946609Z" level=info msg="StartContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\"" Mar 17 17:44:16.874868 systemd[1]: Started cri-containerd-a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7.scope - libcontainer container a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7. 
Mar 17 17:44:16.906013 containerd[1491]: time="2025-03-17T17:44:16.905925907Z" level=info msg="StartContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" returns successfully"
Mar 17 17:44:17.932167 kubelet[2762]: I0317 17:44:17.931873 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9vcrg" podStartSLOduration=2.119599797 podStartE2EDuration="14.931852724s" podCreationTimestamp="2025-03-17 17:44:03 +0000 UTC" firstStartedPulling="2025-03-17 17:44:04.001855575 +0000 UTC m=+5.337026524" lastFinishedPulling="2025-03-17 17:44:16.814108542 +0000 UTC m=+18.149279451" observedRunningTime="2025-03-17 17:44:17.93136463 +0000 UTC m=+19.266535579" watchObservedRunningTime="2025-03-17 17:44:17.931852724 +0000 UTC m=+19.267023673"
Mar 17 17:44:17.934411 kubelet[2762]: I0317 17:44:17.934084 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fcsqd" podStartSLOduration=8.040286759 podStartE2EDuration="15.934067632s" podCreationTimestamp="2025-03-17 17:44:02 +0000 UTC" firstStartedPulling="2025-03-17 17:44:03.920634873 +0000 UTC m=+5.255805822" lastFinishedPulling="2025-03-17 17:44:11.814415746 +0000 UTC m=+13.149586695" observedRunningTime="2025-03-17 17:44:16.943217486 +0000 UTC m=+18.278388475" watchObservedRunningTime="2025-03-17 17:44:17.934067632 +0000 UTC m=+19.269238541"
Mar 17 17:44:21.008720 systemd-networkd[1365]: cilium_host: Link UP
Mar 17 17:44:21.011855 systemd-networkd[1365]: cilium_net: Link UP
Mar 17 17:44:21.012185 systemd-networkd[1365]: cilium_net: Gained carrier
Mar 17 17:44:21.012472 systemd-networkd[1365]: cilium_host: Gained carrier
Mar 17 17:44:21.053067 systemd-networkd[1365]: cilium_net: Gained IPv6LL
Mar 17 17:44:21.120136 systemd-networkd[1365]: cilium_vxlan: Link UP
Mar 17 17:44:21.120143 systemd-networkd[1365]: cilium_vxlan: Gained carrier
Mar 17 17:44:21.411562 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:44:21.764777 systemd-networkd[1365]: cilium_host: Gained IPv6LL
Mar 17 17:44:22.166314 systemd-networkd[1365]: lxc_health: Link UP
Mar 17 17:44:22.183340 systemd-networkd[1365]: lxc_health: Gained carrier
Mar 17 17:44:22.588238 systemd-networkd[1365]: lxc83fa0733187f: Link UP
Mar 17 17:44:22.592967 kernel: eth0: renamed from tmp1a773
Mar 17 17:44:22.600218 systemd-networkd[1365]: lxc83fa0733187f: Gained carrier
Mar 17 17:44:22.626958 systemd-networkd[1365]: lxc74fef9c9b42c: Link UP
Mar 17 17:44:22.634619 kernel: eth0: renamed from tmp369cc
Mar 17 17:44:22.640537 systemd-networkd[1365]: lxc74fef9c9b42c: Gained carrier
Mar 17 17:44:22.786796 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL
Mar 17 17:44:24.066932 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Mar 17 17:44:24.322855 systemd-networkd[1365]: lxc83fa0733187f: Gained IPv6LL
Mar 17 17:44:24.514863 systemd-networkd[1365]: lxc74fef9c9b42c: Gained IPv6LL
Mar 17 17:44:26.529469 containerd[1491]: time="2025-03-17T17:44:26.529300963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:44:26.529859 containerd[1491]: time="2025-03-17T17:44:26.529808739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:44:26.529895 containerd[1491]: time="2025-03-17T17:44:26.529867981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:44:26.531760 containerd[1491]: time="2025-03-17T17:44:26.530026025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:44:26.549785 containerd[1491]: time="2025-03-17T17:44:26.549659873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:44:26.549785 containerd[1491]: time="2025-03-17T17:44:26.549740235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:44:26.550772 containerd[1491]: time="2025-03-17T17:44:26.549774156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:44:26.550772 containerd[1491]: time="2025-03-17T17:44:26.549894560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:44:26.563641 systemd[1]: run-containerd-runc-k8s.io-369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e-runc.KduCde.mount: Deactivated successfully.
Mar 17 17:44:26.573677 systemd[1]: Started cri-containerd-369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e.scope - libcontainer container 369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e.
Mar 17 17:44:26.610082 systemd[1]: Started cri-containerd-1a7734454f23910061cd5ded0b7a736cd645d885d90103671a44bcfe63b23ff8.scope - libcontainer container 1a7734454f23910061cd5ded0b7a736cd645d885d90103671a44bcfe63b23ff8.
Mar 17 17:44:26.653379 containerd[1491]: time="2025-03-17T17:44:26.653073351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vbnx,Uid:3e3eacd7-bf87-4bb3-af76-77d1dcf84392,Namespace:kube-system,Attempt:0,} returns sandbox id \"369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e\""
Mar 17 17:44:26.661559 containerd[1491]: time="2025-03-17T17:44:26.661516252Z" level=info msg="CreateContainer within sandbox \"369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:44:26.679702 containerd[1491]: time="2025-03-17T17:44:26.679643813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fcsxx,Uid:0c0127c5-2a91-4623-89f3-78aab8b38ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7734454f23910061cd5ded0b7a736cd645d885d90103671a44bcfe63b23ff8\""
Mar 17 17:44:26.685762 containerd[1491]: time="2025-03-17T17:44:26.685704841Z" level=info msg="CreateContainer within sandbox \"1a7734454f23910061cd5ded0b7a736cd645d885d90103671a44bcfe63b23ff8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:44:26.689728 containerd[1491]: time="2025-03-17T17:44:26.689671603Z" level=info msg="CreateContainer within sandbox \"369cc0eed43060e4dafad6ae45e4f64579e0af7e7c61a54bed7d4aa14d68bb6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"083ace178dd93babc2eae275080cdfb2f7b553798068af40769302cec7f92030\""
Mar 17 17:44:26.698698 containerd[1491]: time="2025-03-17T17:44:26.698657841Z" level=info msg="StartContainer for \"083ace178dd93babc2eae275080cdfb2f7b553798068af40769302cec7f92030\""
Mar 17 17:44:26.706042 containerd[1491]: time="2025-03-17T17:44:26.704739629Z" level=info msg="CreateContainer within sandbox \"1a7734454f23910061cd5ded0b7a736cd645d885d90103671a44bcfe63b23ff8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5697f4f31b1595a3938ffc1f4da922747872119ff4a3ce317f3a57a1a757118\""
Mar 17 17:44:26.707838 containerd[1491]: time="2025-03-17T17:44:26.707663880Z" level=info msg="StartContainer for \"c5697f4f31b1595a3938ffc1f4da922747872119ff4a3ce317f3a57a1a757118\""
Mar 17 17:44:26.744108 systemd[1]: Started cri-containerd-c5697f4f31b1595a3938ffc1f4da922747872119ff4a3ce317f3a57a1a757118.scope - libcontainer container c5697f4f31b1595a3938ffc1f4da922747872119ff4a3ce317f3a57a1a757118.
Mar 17 17:44:26.758821 systemd[1]: Started cri-containerd-083ace178dd93babc2eae275080cdfb2f7b553798068af40769302cec7f92030.scope - libcontainer container 083ace178dd93babc2eae275080cdfb2f7b553798068af40769302cec7f92030.
Mar 17 17:44:26.797473 containerd[1491]: time="2025-03-17T17:44:26.796753075Z" level=info msg="StartContainer for \"c5697f4f31b1595a3938ffc1f4da922747872119ff4a3ce317f3a57a1a757118\" returns successfully"
Mar 17 17:44:26.815411 containerd[1491]: time="2025-03-17T17:44:26.815352410Z" level=info msg="StartContainer for \"083ace178dd93babc2eae275080cdfb2f7b553798068af40769302cec7f92030\" returns successfully"
Mar 17 17:44:26.970044 kubelet[2762]: I0317 17:44:26.969094 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9vbnx" podStartSLOduration=23.969076005 podStartE2EDuration="23.969076005s" podCreationTimestamp="2025-03-17 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:44:26.968685033 +0000 UTC m=+28.303855982" watchObservedRunningTime="2025-03-17 17:44:26.969076005 +0000 UTC m=+28.304246994"
Mar 17 17:44:26.984493 kubelet[2762]: I0317 17:44:26.984263 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fcsxx" podStartSLOduration=23.984245874 podStartE2EDuration="23.984245874s" podCreationTimestamp="2025-03-17 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:44:26.983366087 +0000 UTC m=+28.318537076" watchObservedRunningTime="2025-03-17 17:44:26.984245874 +0000 UTC m=+28.319416823"
Mar 17 17:44:29.876325 kubelet[2762]: I0317 17:44:29.874593 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:44:57.572850 systemd[1]: Started sshd@8-88.198.122.152:22-221.7.12.139:32998.service - OpenSSH per-connection server daemon (221.7.12.139:32998).
Mar 17 17:45:07.741459 sshd[4150]: banner exchange: Connection from 221.7.12.139 port 32998: invalid format
Mar 17 17:45:07.742555 systemd[1]: sshd@8-88.198.122.152:22-221.7.12.139:32998.service: Deactivated successfully.
Mar 17 17:45:18.814003 systemd[1]: Started sshd@9-88.198.122.152:22-221.7.12.139:11642.service - OpenSSH per-connection server daemon (221.7.12.139:11642).
Mar 17 17:45:23.789551 sshd[4159]: Invalid user wqmarlduiqkmgs from 221.7.12.139 port 11642
Mar 17 17:45:23.791557 sshd[4159]: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Mar 17 17:45:23.794068 systemd[1]: sshd@9-88.198.122.152:22-221.7.12.139:11642.service: Deactivated successfully.
Mar 17 17:46:13.206501 systemd[1]: Started sshd@10-88.198.122.152:22-64.176.71.124:40250.service - OpenSSH per-connection server daemon (64.176.71.124:40250).
Mar 17 17:46:15.487679 sshd[4172]: Connection closed by authenticating user root 64.176.71.124 port 40250 [preauth]
Mar 17 17:46:15.491016 systemd[1]: sshd@10-88.198.122.152:22-64.176.71.124:40250.service: Deactivated successfully.
Mar 17 17:48:11.983587 update_engine[1468]: I20250317 17:48:11.983475 1468 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 17:48:11.983587 update_engine[1468]: I20250317 17:48:11.983578 1468 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.983947 1468 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984428 1468 omaha_request_params.cc:62] Current group set to stable
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984557 1468 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984570 1468 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984591 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984627 1468 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984690 1468 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984722 1468 omaha_request_action.cc:272] Request:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]:
Mar 17 17:48:11.986577 update_engine[1468]: I20250317 17:48:11.984731 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:48:11.986916 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 17:48:11.987104 update_engine[1468]: I20250317 17:48:11.986670 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:48:11.987274 update_engine[1468]: I20250317 17:48:11.987241 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:48:11.988059 update_engine[1468]: E20250317 17:48:11.988026 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:48:11.988128 update_engine[1468]: I20250317 17:48:11.988094 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 17:48:12.947919 systemd[1]: Started sshd@11-88.198.122.152:22-154.117.199.5:15773.service - OpenSSH per-connection server daemon (154.117.199.5:15773).
Mar 17 17:48:13.499643 sshd[4193]: Connection closed by 154.117.199.5 port 15773 [preauth]
Mar 17 17:48:13.500494 systemd[1]: sshd@11-88.198.122.152:22-154.117.199.5:15773.service: Deactivated successfully.
Mar 17 17:48:21.891230 update_engine[1468]: I20250317 17:48:21.891084 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:48:21.891902 update_engine[1468]: I20250317 17:48:21.891460 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:48:21.892040 update_engine[1468]: I20250317 17:48:21.891948 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:48:21.892379 update_engine[1468]: E20250317 17:48:21.892331 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:48:21.892485 update_engine[1468]: I20250317 17:48:21.892388 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 17 17:48:31.894647 update_engine[1468]: I20250317 17:48:31.894483 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:48:31.895558 update_engine[1468]: I20250317 17:48:31.894847 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:48:31.895558 update_engine[1468]: I20250317 17:48:31.895172 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:48:31.895693 update_engine[1468]: E20250317 17:48:31.895601 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:48:31.895693 update_engine[1468]: I20250317 17:48:31.895661 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 17:48:41.891683 update_engine[1468]: I20250317 17:48:41.891580 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:48:41.892241 update_engine[1468]: I20250317 17:48:41.891853 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:48:41.892241 update_engine[1468]: I20250317 17:48:41.892100 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:48:41.892690 update_engine[1468]: E20250317 17:48:41.892630 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:48:41.892798 update_engine[1468]: I20250317 17:48:41.892712 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 17:48:41.892798 update_engine[1468]: I20250317 17:48:41.892731 1468 omaha_request_action.cc:617] Omaha request response:
Mar 17 17:48:41.892890 update_engine[1468]: E20250317 17:48:41.892841 1468 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 17 17:48:41.892890 update_engine[1468]: I20250317 17:48:41.892870 1468 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 17 17:48:41.892890 update_engine[1468]: I20250317 17:48:41.892881 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:48:41.893070 update_engine[1468]: I20250317 17:48:41.892890 1468 update_attempter.cc:306] Processing Done.
Mar 17 17:48:41.893070 update_engine[1468]: E20250317 17:48:41.892912 1468 update_attempter.cc:619] Update failed.
Mar 17 17:48:41.893070 update_engine[1468]: I20250317 17:48:41.892922 1468 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 17 17:48:41.893070 update_engine[1468]: I20250317 17:48:41.892933 1468 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 17 17:48:41.893070 update_engine[1468]: I20250317 17:48:41.892944 1468 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 17 17:48:41.893322 update_engine[1468]: I20250317 17:48:41.893245 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:48:41.893322 update_engine[1468]: I20250317 17:48:41.893294 1468 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:48:41.893322 update_engine[1468]: I20250317 17:48:41.893306 1468 omaha_request_action.cc:272] Request:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]:
Mar 17 17:48:41.893322 update_engine[1468]: I20250317 17:48:41.893316 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:48:41.893726 update_engine[1468]: I20250317 17:48:41.893570 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:48:41.894027 update_engine[1468]: I20250317 17:48:41.893847 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:48:41.894106 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 17:48:41.894617 update_engine[1468]: E20250317 17:48:41.894304 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894385 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894402 1468 omaha_request_action.cc:617] Omaha request response:
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894415 1468 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894426 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894436 1468 update_attempter.cc:306] Processing Done.
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894447 1468 update_attempter.cc:310] Error event sent.
Mar 17 17:48:41.894617 update_engine[1468]: I20250317 17:48:41.894464 1468 update_check_scheduler.cc:74] Next update check in 43m48s
Mar 17 17:48:41.894986 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 17:48:46.750933 systemd[1]: Started sshd@12-88.198.122.152:22-64.176.71.124:51554.service - OpenSSH per-connection server daemon (64.176.71.124:51554).
Mar 17 17:48:48.623826 systemd[1]: Started sshd@13-88.198.122.152:22-139.178.89.65:43054.service - OpenSSH per-connection server daemon (139.178.89.65:43054).
Mar 17 17:48:49.612739 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 43054 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:48:49.615386 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:49.620033 systemd-logind[1467]: New session 8 of user core.
Mar 17 17:48:49.633901 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:48:50.391894 sshd[4205]: Connection closed by 139.178.89.65 port 43054
Mar 17 17:48:50.393034 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:50.397662 systemd[1]: sshd@13-88.198.122.152:22-139.178.89.65:43054.service: Deactivated successfully.
Mar 17 17:48:50.399718 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:48:50.400704 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:48:50.401805 systemd-logind[1467]: Removed session 8.
Mar 17 17:48:55.566817 systemd[1]: Started sshd@14-88.198.122.152:22-139.178.89.65:60642.service - OpenSSH per-connection server daemon (139.178.89.65:60642).
Mar 17 17:48:56.543279 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 60642 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:48:56.545570 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:56.552238 systemd-logind[1467]: New session 9 of user core.
Mar 17 17:48:56.558836 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:48:57.296584 sshd[4219]: Connection closed by 139.178.89.65 port 60642
Mar 17 17:48:57.297246 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:57.301164 systemd[1]: sshd@14-88.198.122.152:22-139.178.89.65:60642.service: Deactivated successfully.
Mar 17 17:48:57.304951 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:48:57.308678 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:48:57.310722 systemd-logind[1467]: Removed session 9.
Mar 17 17:48:58.581040 systemd[1]: Started sshd@15-88.198.122.152:22-185.42.12.3:21556.service - OpenSSH per-connection server daemon (185.42.12.3:21556).
Mar 17 17:48:58.871697 sshd[4230]: Invalid user 1111 from 185.42.12.3 port 21556
Mar 17 17:48:58.918807 sshd-session[4234]: pam_faillock(sshd:auth): User unknown
Mar 17 17:48:58.922727 sshd[4230]: Postponed keyboard-interactive for invalid user 1111 from 185.42.12.3 port 21556 ssh2 [preauth]
Mar 17 17:48:58.964217 sshd-session[4234]: pam_unix(sshd:auth): check pass; user unknown
Mar 17 17:48:58.964253 sshd-session[4234]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.42.12.3
Mar 17 17:48:58.964319 sshd-session[4234]: pam_faillock(sshd:auth): User unknown
Mar 17 17:49:00.280545 sshd[4230]: PAM: Permission denied for illegal user 1111 from 185.42.12.3
Mar 17 17:49:00.281463 sshd[4230]: Failed keyboard-interactive/pam for invalid user 1111 from 185.42.12.3 port 21556 ssh2
Mar 17 17:49:00.326658 sshd[4230]: Received disconnect from 185.42.12.3 port 21556:11: Client disconnecting normally [preauth]
Mar 17 17:49:00.326658 sshd[4230]: Disconnected from invalid user 1111 185.42.12.3 port 21556 [preauth]
Mar 17 17:49:00.330737 systemd[1]: sshd@15-88.198.122.152:22-185.42.12.3:21556.service: Deactivated successfully.
Mar 17 17:49:02.470827 systemd[1]: Started sshd@16-88.198.122.152:22-139.178.89.65:43984.service - OpenSSH per-connection server daemon (139.178.89.65:43984).
Mar 17 17:49:03.466111 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 43984 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:03.468620 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:03.474213 systemd-logind[1467]: New session 10 of user core.
Mar 17 17:49:03.478748 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:49:04.227630 sshd[4240]: Connection closed by 139.178.89.65 port 43984
Mar 17 17:49:04.228611 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:04.232439 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:49:04.233108 systemd[1]: sshd@16-88.198.122.152:22-139.178.89.65:43984.service: Deactivated successfully.
Mar 17 17:49:04.235225 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:49:04.236991 systemd-logind[1467]: Removed session 10.
Mar 17 17:49:04.410439 systemd[1]: Started sshd@17-88.198.122.152:22-139.178.89.65:43996.service - OpenSSH per-connection server daemon (139.178.89.65:43996).
Mar 17 17:49:05.394113 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 43996 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:05.396667 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:05.403422 systemd-logind[1467]: New session 11 of user core.
Mar 17 17:49:05.404790 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:49:06.190647 sshd[4255]: Connection closed by 139.178.89.65 port 43996
Mar 17 17:49:06.191625 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:06.197106 systemd[1]: sshd@17-88.198.122.152:22-139.178.89.65:43996.service: Deactivated successfully.
Mar 17 17:49:06.200794 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:49:06.203677 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:49:06.205152 systemd-logind[1467]: Removed session 11.
Mar 17 17:49:06.369976 systemd[1]: Started sshd@18-88.198.122.152:22-139.178.89.65:44010.service - OpenSSH per-connection server daemon (139.178.89.65:44010).
Mar 17 17:49:07.363251 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 44010 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:07.365486 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:07.372119 systemd-logind[1467]: New session 12 of user core.
Mar 17 17:49:07.381818 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:49:08.118782 sshd[4265]: Connection closed by 139.178.89.65 port 44010
Mar 17 17:49:08.119342 sshd-session[4263]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:08.124934 systemd[1]: sshd@18-88.198.122.152:22-139.178.89.65:44010.service: Deactivated successfully.
Mar 17 17:49:08.127484 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:49:08.129129 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:49:08.130307 systemd-logind[1467]: Removed session 12.
Mar 17 17:49:13.289355 systemd[1]: Started sshd@19-88.198.122.152:22-139.178.89.65:32802.service - OpenSSH per-connection server daemon (139.178.89.65:32802).
Mar 17 17:49:14.300777 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 32802 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:14.302856 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:14.307112 systemd-logind[1467]: New session 13 of user core.
Mar 17 17:49:14.311673 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:49:15.061595 sshd[4277]: Connection closed by 139.178.89.65 port 32802
Mar 17 17:49:15.062463 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:15.066442 systemd[1]: sshd@19-88.198.122.152:22-139.178.89.65:32802.service: Deactivated successfully.
Mar 17 17:49:15.069967 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:49:15.072762 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:49:15.074812 systemd-logind[1467]: Removed session 13.
Mar 17 17:49:15.237244 systemd[1]: Started sshd@20-88.198.122.152:22-139.178.89.65:32804.service - OpenSSH per-connection server daemon (139.178.89.65:32804).
Mar 17 17:49:16.214577 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 32804 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:16.216186 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:16.223127 systemd-logind[1467]: New session 14 of user core.
Mar 17 17:49:16.228795 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:49:17.015546 sshd[4290]: Connection closed by 139.178.89.65 port 32804
Mar 17 17:49:17.016563 sshd-session[4288]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:17.022521 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:49:17.023017 systemd[1]: sshd@20-88.198.122.152:22-139.178.89.65:32804.service: Deactivated successfully.
Mar 17 17:49:17.026312 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:49:17.028868 systemd-logind[1467]: Removed session 14.
Mar 17 17:49:17.193955 systemd[1]: Started sshd@21-88.198.122.152:22-139.178.89.65:32816.service - OpenSSH per-connection server daemon (139.178.89.65:32816).
Mar 17 17:49:18.177338 sshd[4298]: Accepted publickey for core from 139.178.89.65 port 32816 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:18.179928 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:18.186387 systemd-logind[1467]: New session 15 of user core.
Mar 17 17:49:18.195046 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:49:19.864263 sshd[4300]: Connection closed by 139.178.89.65 port 32816
Mar 17 17:49:19.865185 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:19.871304 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:49:19.871814 systemd[1]: sshd@21-88.198.122.152:22-139.178.89.65:32816.service: Deactivated successfully.
Mar 17 17:49:19.873896 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:49:19.874891 systemd-logind[1467]: Removed session 15.
Mar 17 17:49:20.049293 systemd[1]: Started sshd@22-88.198.122.152:22-139.178.89.65:32826.service - OpenSSH per-connection server daemon (139.178.89.65:32826).
Mar 17 17:49:21.046334 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 32826 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:21.049017 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:21.053782 systemd-logind[1467]: New session 16 of user core.
Mar 17 17:49:21.063819 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:49:21.934193 sshd[4318]: Connection closed by 139.178.89.65 port 32826
Mar 17 17:49:21.935155 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:21.939484 systemd[1]: sshd@22-88.198.122.152:22-139.178.89.65:32826.service: Deactivated successfully.
Mar 17 17:49:21.941564 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:49:21.942389 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:49:21.943564 systemd-logind[1467]: Removed session 16.
Mar 17 17:49:22.116291 systemd[1]: Started sshd@23-88.198.122.152:22-139.178.89.65:32940.service - OpenSSH per-connection server daemon (139.178.89.65:32940).
Mar 17 17:49:23.112277 sshd[4327]: Accepted publickey for core from 139.178.89.65 port 32940 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:23.114338 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:23.119805 systemd-logind[1467]: New session 17 of user core.
Mar 17 17:49:23.122703 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:49:23.870590 sshd[4329]: Connection closed by 139.178.89.65 port 32940
Mar 17 17:49:23.871456 sshd-session[4327]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:23.877289 systemd[1]: sshd@23-88.198.122.152:22-139.178.89.65:32940.service: Deactivated successfully.
Mar 17 17:49:23.880160 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:49:23.881566 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:49:23.883118 systemd-logind[1467]: Removed session 17.
Mar 17 17:49:29.044059 systemd[1]: Started sshd@24-88.198.122.152:22-139.178.89.65:32948.service - OpenSSH per-connection server daemon (139.178.89.65:32948).
Mar 17 17:49:30.042921 sshd[4342]: Accepted publickey for core from 139.178.89.65 port 32948 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:49:30.045145 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:49:30.051984 systemd-logind[1467]: New session 18 of user core.
Mar 17 17:49:30.056828 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:49:30.799528 sshd[4344]: Connection closed by 139.178.89.65 port 32948
Mar 17 17:49:30.800561 sshd-session[4342]: pam_unix(sshd:session): session closed for user core
Mar 17 17:49:30.804679 systemd[1]: sshd@24-88.198.122.152:22-139.178.89.65:32948.service: Deactivated successfully.
Mar 17 17:49:30.808302 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:49:30.809856 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:49:30.811999 systemd-logind[1467]: Removed session 18. Mar 17 17:49:35.975988 systemd[1]: Started sshd@25-88.198.122.152:22-139.178.89.65:42056.service - OpenSSH per-connection server daemon (139.178.89.65:42056). Mar 17 17:49:36.961059 sshd[4356]: Accepted publickey for core from 139.178.89.65 port 42056 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:49:36.963870 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:36.969805 systemd-logind[1467]: New session 19 of user core. Mar 17 17:49:36.979852 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:49:37.709412 sshd[4358]: Connection closed by 139.178.89.65 port 42056 Mar 17 17:49:37.711883 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:37.716498 systemd[1]: sshd@25-88.198.122.152:22-139.178.89.65:42056.service: Deactivated successfully. Mar 17 17:49:37.719924 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:49:37.722106 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:49:37.724881 systemd-logind[1467]: Removed session 19. Mar 17 17:49:37.891923 systemd[1]: Started sshd@26-88.198.122.152:22-139.178.89.65:42064.service - OpenSSH per-connection server daemon (139.178.89.65:42064). Mar 17 17:49:38.883662 sshd[4368]: Accepted publickey for core from 139.178.89.65 port 42064 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:49:38.885914 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:38.890254 systemd-logind[1467]: New session 20 of user core. Mar 17 17:49:38.902913 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 17 17:49:41.365152 containerd[1491]: time="2025-03-17T17:49:41.365096695Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:41.372898 containerd[1491]: time="2025-03-17T17:49:41.372064718Z" level=info msg="StopContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" with timeout 30 (s)" Mar 17 17:49:41.372898 containerd[1491]: time="2025-03-17T17:49:41.372485407Z" level=info msg="Stop container \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" with signal terminated" Mar 17 17:49:41.378874 containerd[1491]: time="2025-03-17T17:49:41.378637893Z" level=info msg="StopContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" with timeout 2 (s)" Mar 17 17:49:41.379947 containerd[1491]: time="2025-03-17T17:49:41.379913359Z" level=info msg="Stop container \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" with signal terminated" Mar 17 17:49:41.387191 systemd[1]: cri-containerd-a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7.scope: Deactivated successfully. Mar 17 17:49:41.392846 systemd-networkd[1365]: lxc_health: Link DOWN Mar 17 17:49:41.392853 systemd-networkd[1365]: lxc_health: Lost carrier Mar 17 17:49:41.414985 systemd[1]: cri-containerd-66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d.scope: Deactivated successfully. Mar 17 17:49:41.415302 systemd[1]: cri-containerd-66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d.scope: Consumed 7.825s CPU time. Mar 17 17:49:41.433193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7-rootfs.mount: Deactivated successfully. 
Mar 17 17:49:41.441663 containerd[1491]: time="2025-03-17T17:49:41.441426738Z" level=info msg="shim disconnected" id=a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7 namespace=k8s.io Mar 17 17:49:41.441663 containerd[1491]: time="2025-03-17T17:49:41.441482059Z" level=warning msg="cleaning up after shim disconnected" id=a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7 namespace=k8s.io Mar 17 17:49:41.441663 containerd[1491]: time="2025-03-17T17:49:41.441494619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:41.453276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d-rootfs.mount: Deactivated successfully. Mar 17 17:49:41.462226 containerd[1491]: time="2025-03-17T17:49:41.462081721Z" level=info msg="StopContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" returns successfully" Mar 17 17:49:41.463533 containerd[1491]: time="2025-03-17T17:49:41.462960979Z" level=info msg="shim disconnected" id=66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d namespace=k8s.io Mar 17 17:49:41.463533 containerd[1491]: time="2025-03-17T17:49:41.463360547Z" level=warning msg="cleaning up after shim disconnected" id=66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d namespace=k8s.io Mar 17 17:49:41.463533 containerd[1491]: time="2025-03-17T17:49:41.463371867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:41.463797 containerd[1491]: time="2025-03-17T17:49:41.463141742Z" level=info msg="StopPodSandbox for \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\"" Mar 17 17:49:41.463961 containerd[1491]: time="2025-03-17T17:49:41.463944519Z" level=info msg="Container to stop \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.469008 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f-shm.mount: Deactivated successfully. Mar 17 17:49:41.487088 systemd[1]: cri-containerd-5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f.scope: Deactivated successfully. Mar 17 17:49:41.500102 containerd[1491]: time="2025-03-17T17:49:41.500052458Z" level=info msg="StopContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" returns successfully" Mar 17 17:49:41.500853 containerd[1491]: time="2025-03-17T17:49:41.500817194Z" level=info msg="StopPodSandbox for \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\"" Mar 17 17:49:41.500942 containerd[1491]: time="2025-03-17T17:49:41.500861114Z" level=info msg="Container to stop \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.500942 containerd[1491]: time="2025-03-17T17:49:41.500874675Z" level=info msg="Container to stop \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.500942 containerd[1491]: time="2025-03-17T17:49:41.500884515Z" level=info msg="Container to stop \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.500942 containerd[1491]: time="2025-03-17T17:49:41.500895395Z" level=info msg="Container to stop \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.500942 containerd[1491]: time="2025-03-17T17:49:41.500904355Z" level=info msg="Container to stop \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:49:41.508696 systemd[1]: 
cri-containerd-a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba.scope: Deactivated successfully. Mar 17 17:49:41.530297 containerd[1491]: time="2025-03-17T17:49:41.530182155Z" level=info msg="shim disconnected" id=5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f namespace=k8s.io Mar 17 17:49:41.530297 containerd[1491]: time="2025-03-17T17:49:41.530286997Z" level=warning msg="cleaning up after shim disconnected" id=5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f namespace=k8s.io Mar 17 17:49:41.530549 containerd[1491]: time="2025-03-17T17:49:41.530305837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:41.537335 containerd[1491]: time="2025-03-17T17:49:41.537272100Z" level=info msg="shim disconnected" id=a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba namespace=k8s.io Mar 17 17:49:41.537717 containerd[1491]: time="2025-03-17T17:49:41.537692068Z" level=warning msg="cleaning up after shim disconnected" id=a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba namespace=k8s.io Mar 17 17:49:41.537819 containerd[1491]: time="2025-03-17T17:49:41.537802351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:41.551131 containerd[1491]: time="2025-03-17T17:49:41.551080702Z" level=info msg="TearDown network for sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" successfully" Mar 17 17:49:41.551131 containerd[1491]: time="2025-03-17T17:49:41.551117943Z" level=info msg="StopPodSandbox for \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" returns successfully" Mar 17 17:49:41.558179 containerd[1491]: time="2025-03-17T17:49:41.557963083Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:49:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:49:41.559733 containerd[1491]: 
time="2025-03-17T17:49:41.559617197Z" level=info msg="TearDown network for sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" successfully" Mar 17 17:49:41.559733 containerd[1491]: time="2025-03-17T17:49:41.559651518Z" level=info msg="StopPodSandbox for \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" returns successfully" Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713441 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-etc-cni-netd\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713557 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-cilium-config-path\") pod \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\" (UID: \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\") " Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713608 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprlf\" (UniqueName: \"kubernetes.io/projected/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-kube-api-access-hprlf\") pod \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\" (UID: \"45d6880b-4381-47a1-9fe6-6cc68c1a51cf\") " Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713657 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-run\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713701 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-config-path\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715129 kubelet[2762]: I0317 17:49:41.713732 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwtm\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713759 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-cgroup\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713813 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-lib-modules\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713841 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-xtables-lock\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713871 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-bpf-maps\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713897 2762 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cni-path\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.715968 kubelet[2762]: I0317 17:49:41.713927 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-net\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.716216 kubelet[2762]: I0317 17:49:41.713953 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-kernel\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.716216 kubelet[2762]: I0317 17:49:41.713990 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-hostproc\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.716216 kubelet[2762]: I0317 17:49:41.714026 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cc258aa-21f4-4983-b041-01b98f5f822b-clustermesh-secrets\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: \"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.716216 kubelet[2762]: I0317 17:49:41.714059 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-hubble-tls\") pod \"9cc258aa-21f4-4983-b041-01b98f5f822b\" (UID: 
\"9cc258aa-21f4-4983-b041-01b98f5f822b\") " Mar 17 17:49:41.716216 kubelet[2762]: I0317 17:49:41.714641 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.716408 kubelet[2762]: I0317 17:49:41.714703 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717441 kubelet[2762]: I0317 17:49:41.717337 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717441 kubelet[2762]: I0317 17:49:41.717401 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717441 kubelet[2762]: I0317 17:49:41.717430 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cni-path" (OuterVolumeSpecName: "cni-path") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717688 kubelet[2762]: I0317 17:49:41.717457 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717688 kubelet[2762]: I0317 17:49:41.717482 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.717688 kubelet[2762]: I0317 17:49:41.717531 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-hostproc" (OuterVolumeSpecName: "hostproc") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.721545 kubelet[2762]: I0317 17:49:41.719063 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.723305 kubelet[2762]: I0317 17:49:41.723257 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:49:41.724254 kubelet[2762]: I0317 17:49:41.724202 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-kube-api-access-hprlf" (OuterVolumeSpecName: "kube-api-access-hprlf") pod "45d6880b-4381-47a1-9fe6-6cc68c1a51cf" (UID: "45d6880b-4381-47a1-9fe6-6cc68c1a51cf"). InnerVolumeSpecName "kube-api-access-hprlf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:49:41.727592 kubelet[2762]: I0317 17:49:41.727472 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:49:41.728140 kubelet[2762]: I0317 17:49:41.728074 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45d6880b-4381-47a1-9fe6-6cc68c1a51cf" (UID: "45d6880b-4381-47a1-9fe6-6cc68c1a51cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:49:41.728607 kubelet[2762]: I0317 17:49:41.728470 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:49:41.730899 kubelet[2762]: I0317 17:49:41.730830 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc258aa-21f4-4983-b041-01b98f5f822b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 17:49:41.730899 kubelet[2762]: I0317 17:49:41.730834 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm" (OuterVolumeSpecName: "kube-api-access-jmwtm") pod "9cc258aa-21f4-4983-b041-01b98f5f822b" (UID: "9cc258aa-21f4-4983-b041-01b98f5f822b"). InnerVolumeSpecName "kube-api-access-jmwtm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:49:41.736749 kubelet[2762]: I0317 17:49:41.736610 2762 scope.go:117] "RemoveContainer" containerID="a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7" Mar 17 17:49:41.741131 containerd[1491]: time="2025-03-17T17:49:41.741084832Z" level=info msg="RemoveContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\"" Mar 17 17:49:41.747557 containerd[1491]: time="2025-03-17T17:49:41.746467262Z" level=info msg="RemoveContainer for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" returns successfully" Mar 17 17:49:41.747731 systemd[1]: Removed slice kubepods-besteffort-pod45d6880b_4381_47a1_9fe6_6cc68c1a51cf.slice - libcontainer container kubepods-besteffort-pod45d6880b_4381_47a1_9fe6_6cc68c1a51cf.slice. Mar 17 17:49:41.751483 kubelet[2762]: I0317 17:49:41.750014 2762 scope.go:117] "RemoveContainer" containerID="a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7" Mar 17 17:49:41.751483 kubelet[2762]: E0317 17:49:41.750552 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\": not found" containerID="a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7" Mar 17 17:49:41.751483 kubelet[2762]: I0317 17:49:41.750579 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7"} err="failed to get container status \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\": not found" Mar 17 17:49:41.751483 kubelet[2762]: I0317 17:49:41.750658 2762 scope.go:117] "RemoveContainer" 
containerID="66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d" Mar 17 17:49:41.752003 containerd[1491]: time="2025-03-17T17:49:41.750371422Z" level=error msg="ContainerStatus for \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8cf228ca42d791b8d24444b70f842561b5283594538d631534725e85152fba7\": not found" Mar 17 17:49:41.753533 systemd[1]: Removed slice kubepods-burstable-pod9cc258aa_21f4_4983_b041_01b98f5f822b.slice - libcontainer container kubepods-burstable-pod9cc258aa_21f4_4983_b041_01b98f5f822b.slice. Mar 17 17:49:41.753707 systemd[1]: kubepods-burstable-pod9cc258aa_21f4_4983_b041_01b98f5f822b.slice: Consumed 7.915s CPU time. Mar 17 17:49:41.755372 containerd[1491]: time="2025-03-17T17:49:41.755171880Z" level=info msg="RemoveContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\"" Mar 17 17:49:41.759123 containerd[1491]: time="2025-03-17T17:49:41.759082720Z" level=info msg="RemoveContainer for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" returns successfully" Mar 17 17:49:41.760452 kubelet[2762]: I0317 17:49:41.760275 2762 scope.go:117] "RemoveContainer" containerID="1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3" Mar 17 17:49:41.761846 containerd[1491]: time="2025-03-17T17:49:41.761809176Z" level=info msg="RemoveContainer for \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\"" Mar 17 17:49:41.765689 containerd[1491]: time="2025-03-17T17:49:41.765624494Z" level=info msg="RemoveContainer for \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\" returns successfully" Mar 17 17:49:41.766288 kubelet[2762]: I0317 17:49:41.765951 2762 scope.go:117] "RemoveContainer" containerID="0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030" Mar 17 17:49:41.767347 containerd[1491]: time="2025-03-17T17:49:41.767316809Z" level=info 
msg="RemoveContainer for \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\"" Mar 17 17:49:41.773543 containerd[1491]: time="2025-03-17T17:49:41.772189908Z" level=info msg="RemoveContainer for \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\" returns successfully" Mar 17 17:49:41.774842 kubelet[2762]: I0317 17:49:41.774767 2762 scope.go:117] "RemoveContainer" containerID="65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663" Mar 17 17:49:41.778783 containerd[1491]: time="2025-03-17T17:49:41.778732602Z" level=info msg="RemoveContainer for \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\"" Mar 17 17:49:41.782626 containerd[1491]: time="2025-03-17T17:49:41.782578641Z" level=info msg="RemoveContainer for \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\" returns successfully" Mar 17 17:49:41.783184 kubelet[2762]: I0317 17:49:41.783080 2762 scope.go:117] "RemoveContainer" containerID="151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918" Mar 17 17:49:41.786013 containerd[1491]: time="2025-03-17T17:49:41.785977551Z" level=info msg="RemoveContainer for \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\"" Mar 17 17:49:41.792395 containerd[1491]: time="2025-03-17T17:49:41.792257879Z" level=info msg="RemoveContainer for \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\" returns successfully" Mar 17 17:49:41.793100 kubelet[2762]: I0317 17:49:41.792554 2762 scope.go:117] "RemoveContainer" containerID="66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d" Mar 17 17:49:41.793571 containerd[1491]: time="2025-03-17T17:49:41.793427783Z" level=error msg="ContainerStatus for \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\": not found" Mar 17 17:49:41.793686 
kubelet[2762]: E0317 17:49:41.793637 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\": not found" containerID="66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d" Mar 17 17:49:41.793686 kubelet[2762]: I0317 17:49:41.793668 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d"} err="failed to get container status \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"66cdb71f1cdaecc10074f01dc8cd530cf383d5d17870ed0f4c479027e7639b9d\": not found" Mar 17 17:49:41.794057 kubelet[2762]: I0317 17:49:41.793688 2762 scope.go:117] "RemoveContainer" containerID="1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3" Mar 17 17:49:41.794115 containerd[1491]: time="2025-03-17T17:49:41.793914633Z" level=error msg="ContainerStatus for \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\": not found" Mar 17 17:49:41.794168 kubelet[2762]: E0317 17:49:41.794059 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\": not found" containerID="1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3" Mar 17 17:49:41.794168 kubelet[2762]: I0317 17:49:41.794081 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3"} err="failed to get 
container status \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fd18ca621a4d6d9c28ef26a06a5e78ae94238a901ec15869fe3c6400b6087a3\": not found" Mar 17 17:49:41.794168 kubelet[2762]: I0317 17:49:41.794140 2762 scope.go:117] "RemoveContainer" containerID="0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030" Mar 17 17:49:41.794405 containerd[1491]: time="2025-03-17T17:49:41.794333762Z" level=error msg="ContainerStatus for \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\": not found" Mar 17 17:49:41.794526 kubelet[2762]: E0317 17:49:41.794479 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\": not found" containerID="0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030" Mar 17 17:49:41.794526 kubelet[2762]: I0317 17:49:41.794539 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030"} err="failed to get container status \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e7222bd21a97c1525a18dfff363399087dde01de1688f276f3e21515e332030\": not found" Mar 17 17:49:41.794805 kubelet[2762]: I0317 17:49:41.794575 2762 scope.go:117] "RemoveContainer" containerID="65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663" Mar 17 17:49:41.795148 containerd[1491]: time="2025-03-17T17:49:41.794999655Z" level=error msg="ContainerStatus for 
\"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\": not found" Mar 17 17:49:41.795269 kubelet[2762]: E0317 17:49:41.795153 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\": not found" containerID="65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663" Mar 17 17:49:41.795269 kubelet[2762]: I0317 17:49:41.795172 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663"} err="failed to get container status \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\": rpc error: code = NotFound desc = an error occurred when try to find container \"65b62bc1873d395768a03a9b316d2611a8854a8c7dc4dda27b4acd6236e50663\": not found" Mar 17 17:49:41.795269 kubelet[2762]: I0317 17:49:41.795189 2762 scope.go:117] "RemoveContainer" containerID="151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918" Mar 17 17:49:41.795942 containerd[1491]: time="2025-03-17T17:49:41.795764031Z" level=error msg="ContainerStatus for \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\": not found" Mar 17 17:49:41.796059 kubelet[2762]: E0317 17:49:41.795975 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\": not found" 
containerID="151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918" Mar 17 17:49:41.796059 kubelet[2762]: I0317 17:49:41.795995 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918"} err="failed to get container status \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\": rpc error: code = NotFound desc = an error occurred when try to find container \"151c70ab7cffe1ae5c83c23f11dcc241fea7e2341f99e1998208b14acb897918\": not found" Mar 17 17:49:41.815392 kubelet[2762]: I0317 17:49:41.815332 2762 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-bpf-maps\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815392 kubelet[2762]: I0317 17:49:41.815391 2762 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cni-path\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815412 2762 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-net\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815431 2762 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-hubble-tls\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815450 2762 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-host-proc-sys-kernel\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 
17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815469 2762 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-hostproc\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815485 2762 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cc258aa-21f4-4983-b041-01b98f5f822b-clustermesh-secrets\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815502 2762 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-etc-cni-netd\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815544 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-cilium-config-path\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.815693 kubelet[2762]: I0317 17:49:41.815560 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hprlf\" (UniqueName: \"kubernetes.io/projected/45d6880b-4381-47a1-9fe6-6cc68c1a51cf-kube-api-access-hprlf\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815579 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-config-path\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815602 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-run\") on node \"ci-4152-2-2-4-d76a313bf1\" 
DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815620 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-cilium-cgroup\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815637 2762 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-lib-modules\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815652 2762 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc258aa-21f4-4983-b041-01b98f5f822b-xtables-lock\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:41.816178 kubelet[2762]: I0317 17:49:41.815669 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jmwtm\" (UniqueName: \"kubernetes.io/projected/9cc258aa-21f4-4983-b041-01b98f5f822b-kube-api-access-jmwtm\") on node \"ci-4152-2-2-4-d76a313bf1\" DevicePath \"\"" Mar 17 17:49:42.342237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f-rootfs.mount: Deactivated successfully. Mar 17 17:49:42.342418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba-rootfs.mount: Deactivated successfully. Mar 17 17:49:42.342590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba-shm.mount: Deactivated successfully. Mar 17 17:49:42.342705 systemd[1]: var-lib-kubelet-pods-45d6880b\x2d4381\x2d47a1\x2d9fe6\x2d6cc68c1a51cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhprlf.mount: Deactivated successfully. 
Mar 17 17:49:42.342860 systemd[1]: var-lib-kubelet-pods-9cc258aa\x2d21f4\x2d4983\x2db041\x2d01b98f5f822b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmwtm.mount: Deactivated successfully. Mar 17 17:49:42.342991 systemd[1]: var-lib-kubelet-pods-9cc258aa\x2d21f4\x2d4983\x2db041\x2d01b98f5f822b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:49:42.343095 systemd[1]: var-lib-kubelet-pods-9cc258aa\x2d21f4\x2d4983\x2db041\x2d01b98f5f822b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:49:42.770554 kubelet[2762]: I0317 17:49:42.769329 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d6880b-4381-47a1-9fe6-6cc68c1a51cf" path="/var/lib/kubelet/pods/45d6880b-4381-47a1-9fe6-6cc68c1a51cf/volumes" Mar 17 17:49:42.770554 kubelet[2762]: I0317 17:49:42.770246 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc258aa-21f4-4983-b041-01b98f5f822b" path="/var/lib/kubelet/pods/9cc258aa-21f4-4983-b041-01b98f5f822b/volumes" Mar 17 17:49:43.432087 sshd[4370]: Connection closed by 139.178.89.65 port 42064 Mar 17 17:49:43.432764 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:43.438327 systemd[1]: sshd@26-88.198.122.152:22-139.178.89.65:42064.service: Deactivated successfully. Mar 17 17:49:43.441266 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:49:43.441460 systemd[1]: session-20.scope: Consumed 1.286s CPU time. Mar 17 17:49:43.442315 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:49:43.443929 systemd-logind[1467]: Removed session 20. Mar 17 17:49:43.614954 systemd[1]: Started sshd@27-88.198.122.152:22-139.178.89.65:60440.service - OpenSSH per-connection server daemon (139.178.89.65:60440). 
Mar 17 17:49:43.975274 kubelet[2762]: E0317 17:49:43.975083 2762 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:49:44.610162 sshd[4533]: Accepted publickey for core from 139.178.89.65 port 60440 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:49:44.612470 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:44.618455 systemd-logind[1467]: New session 21 of user core. Mar 17 17:49:44.623687 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:49:44.766193 kubelet[2762]: E0317 17:49:44.765848 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fcsxx" podUID="0c0127c5-2a91-4623-89f3-78aab8b38ba9" Mar 17 17:49:46.632097 kubelet[2762]: I0317 17:49:46.632041 2762 memory_manager.go:355] "RemoveStaleState removing state" podUID="9cc258aa-21f4-4983-b041-01b98f5f822b" containerName="cilium-agent" Mar 17 17:49:46.632097 kubelet[2762]: I0317 17:49:46.632079 2762 memory_manager.go:355] "RemoveStaleState removing state" podUID="45d6880b-4381-47a1-9fe6-6cc68c1a51cf" containerName="cilium-operator" Mar 17 17:49:46.643549 systemd[1]: Created slice kubepods-burstable-podbfb84a01_f30b_430d_961e_7909d9fe6efd.slice - libcontainer container kubepods-burstable-podbfb84a01_f30b_430d_961e_7909d9fe6efd.slice. 
Mar 17 17:49:46.691851 kubelet[2762]: I0317 17:49:46.691765 2762 setters.go:602] "Node became not ready" node="ci-4152-2-2-4-d76a313bf1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:49:46Z","lastTransitionTime":"2025-03-17T17:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:49:46.748690 kubelet[2762]: I0317 17:49:46.748439 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfb84a01-f30b-430d-961e-7909d9fe6efd-clustermesh-secrets\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.748690 kubelet[2762]: I0317 17:49:46.748529 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-bpf-maps\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.748690 kubelet[2762]: I0317 17:49:46.748652 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-cni-path\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748721 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-hostproc\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748766 2762 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-cilium-cgroup\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748799 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-lib-modules\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748830 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-xtables-lock\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748884 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-host-proc-sys-kernel\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749027 kubelet[2762]: I0317 17:49:46.748922 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-etc-cni-netd\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749388 kubelet[2762]: I0317 17:49:46.748956 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-host-proc-sys-net\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749388 kubelet[2762]: I0317 17:49:46.748988 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfb84a01-f30b-430d-961e-7909d9fe6efd-hubble-tls\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749388 kubelet[2762]: I0317 17:49:46.749021 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vlx\" (UniqueName: \"kubernetes.io/projected/bfb84a01-f30b-430d-961e-7909d9fe6efd-kube-api-access-67vlx\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749388 kubelet[2762]: I0317 17:49:46.749050 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfb84a01-f30b-430d-961e-7909d9fe6efd-cilium-run\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749388 kubelet[2762]: I0317 17:49:46.749079 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfb84a01-f30b-430d-961e-7909d9fe6efd-cilium-config-path\") pod \"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.749661 kubelet[2762]: I0317 17:49:46.749107 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bfb84a01-f30b-430d-961e-7909d9fe6efd-cilium-ipsec-secrets\") pod 
\"cilium-sgxkw\" (UID: \"bfb84a01-f30b-430d-961e-7909d9fe6efd\") " pod="kube-system/cilium-sgxkw" Mar 17 17:49:46.765856 kubelet[2762]: E0317 17:49:46.765430 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fcsxx" podUID="0c0127c5-2a91-4623-89f3-78aab8b38ba9" Mar 17 17:49:46.810223 sshd[4535]: Connection closed by 139.178.89.65 port 60440 Mar 17 17:49:46.811858 sshd-session[4533]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:46.815701 systemd[1]: sshd@27-88.198.122.152:22-139.178.89.65:60440.service: Deactivated successfully. Mar 17 17:49:46.819939 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:49:46.820292 systemd[1]: session-21.scope: Consumed 1.381s CPU time. Mar 17 17:49:46.825091 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:49:46.826942 systemd-logind[1467]: Removed session 21. Mar 17 17:49:46.948828 containerd[1491]: time="2025-03-17T17:49:46.947825032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sgxkw,Uid:bfb84a01-f30b-430d-961e-7909d9fe6efd,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:46.984947 systemd[1]: Started sshd@28-88.198.122.152:22-139.178.89.65:60456.service - OpenSSH per-connection server daemon (139.178.89.65:60456). Mar 17 17:49:46.989771 containerd[1491]: time="2025-03-17T17:49:46.986238064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:46.989771 containerd[1491]: time="2025-03-17T17:49:46.988092662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:46.989771 containerd[1491]: time="2025-03-17T17:49:46.988105103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:46.989771 containerd[1491]: time="2025-03-17T17:49:46.988188624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:47.006808 systemd[1]: Started cri-containerd-9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650.scope - libcontainer container 9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650. Mar 17 17:49:47.043264 containerd[1491]: time="2025-03-17T17:49:47.043205719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sgxkw,Uid:bfb84a01-f30b-430d-961e-7909d9fe6efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\"" Mar 17 17:49:47.048077 containerd[1491]: time="2025-03-17T17:49:47.048035059Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:49:47.064265 containerd[1491]: time="2025-03-17T17:49:47.064203193Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b\"" Mar 17 17:49:47.066758 containerd[1491]: time="2025-03-17T17:49:47.065649063Z" level=info msg="StartContainer for \"aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b\"" Mar 17 17:49:47.091753 systemd[1]: Started cri-containerd-aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b.scope - libcontainer container 
aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b. Mar 17 17:49:47.125876 containerd[1491]: time="2025-03-17T17:49:47.125753543Z" level=info msg="StartContainer for \"aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b\" returns successfully" Mar 17 17:49:47.135043 systemd[1]: cri-containerd-aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b.scope: Deactivated successfully. Mar 17 17:49:47.172243 containerd[1491]: time="2025-03-17T17:49:47.172130100Z" level=info msg="shim disconnected" id=aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b namespace=k8s.io Mar 17 17:49:47.172243 containerd[1491]: time="2025-03-17T17:49:47.172200422Z" level=warning msg="cleaning up after shim disconnected" id=aa82375f191ae4a914df6b6c018e12080c7dba86f69f02f67cf52592a1ac972b namespace=k8s.io Mar 17 17:49:47.172243 containerd[1491]: time="2025-03-17T17:49:47.172210502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:47.185296 containerd[1491]: time="2025-03-17T17:49:47.185231611Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:49:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:49:47.771981 containerd[1491]: time="2025-03-17T17:49:47.771499750Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:49:47.783875 containerd[1491]: time="2025-03-17T17:49:47.783816524Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f\"" Mar 17 17:49:47.785815 containerd[1491]: time="2025-03-17T17:49:47.784612141Z" level=info 
msg="StartContainer for \"eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f\"" Mar 17 17:49:47.816748 systemd[1]: Started cri-containerd-eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f.scope - libcontainer container eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f. Mar 17 17:49:47.852531 containerd[1491]: time="2025-03-17T17:49:47.850903149Z" level=info msg="StartContainer for \"eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f\" returns successfully" Mar 17 17:49:47.870750 systemd[1]: cri-containerd-eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f.scope: Deactivated successfully. Mar 17 17:49:47.893489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f-rootfs.mount: Deactivated successfully. Mar 17 17:49:47.901679 containerd[1491]: time="2025-03-17T17:49:47.901367510Z" level=info msg="shim disconnected" id=eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f namespace=k8s.io Mar 17 17:49:47.901679 containerd[1491]: time="2025-03-17T17:49:47.901447112Z" level=warning msg="cleaning up after shim disconnected" id=eab0144e8f3c9dab95b6df2cce9824f0084a707e867c4433e6355fbd0a612d7f namespace=k8s.io Mar 17 17:49:47.901679 containerd[1491]: time="2025-03-17T17:49:47.901464672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:47.988553 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 60456 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:49:47.990851 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:47.997486 systemd-logind[1467]: New session 22 of user core. Mar 17 17:49:48.005848 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 17 17:49:48.669577 sshd[4717]: Connection closed by 139.178.89.65 port 60456 Mar 17 17:49:48.670318 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:48.675759 systemd[1]: sshd@28-88.198.122.152:22-139.178.89.65:60456.service: Deactivated successfully. Mar 17 17:49:48.677811 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:49:48.679140 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:49:48.680866 systemd-logind[1467]: Removed session 22. Mar 17 17:49:48.767005 kubelet[2762]: E0317 17:49:48.766210 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fcsxx" podUID="0c0127c5-2a91-4623-89f3-78aab8b38ba9" Mar 17 17:49:48.781083 containerd[1491]: time="2025-03-17T17:49:48.778011063Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:49:48.798544 containerd[1491]: time="2025-03-17T17:49:48.798483806Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4\"" Mar 17 17:49:48.799441 containerd[1491]: time="2025-03-17T17:49:48.799402145Z" level=info msg="StartContainer for \"2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4\"" Mar 17 17:49:48.845812 systemd[1]: Started cri-containerd-2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4.scope - libcontainer container 2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4. 
Mar 17 17:49:48.847913 systemd[1]: Started sshd@29-88.198.122.152:22-139.178.89.65:60464.service - OpenSSH per-connection server daemon (139.178.89.65:60464). Mar 17 17:49:48.891018 containerd[1491]: time="2025-03-17T17:49:48.890124620Z" level=info msg="StartContainer for \"2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4\" returns successfully" Mar 17 17:49:48.895332 systemd[1]: cri-containerd-2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4.scope: Deactivated successfully. Mar 17 17:49:48.919975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4-rootfs.mount: Deactivated successfully. Mar 17 17:49:48.934543 containerd[1491]: time="2025-03-17T17:49:48.933352833Z" level=info msg="shim disconnected" id=2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4 namespace=k8s.io Mar 17 17:49:48.934543 containerd[1491]: time="2025-03-17T17:49:48.933413395Z" level=warning msg="cleaning up after shim disconnected" id=2247c102918761f5eb7dbe1e3d591f33723168dfc7ce2c4dec620d37cdb3b0b4 namespace=k8s.io Mar 17 17:49:48.934543 containerd[1491]: time="2025-03-17T17:49:48.933421795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:48.977092 kubelet[2762]: E0317 17:49:48.976959 2762 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:49:49.784288 containerd[1491]: time="2025-03-17T17:49:49.783580543Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:49:49.801207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340748014.mount: Deactivated successfully. 
Mar 17 17:49:49.807701 containerd[1491]: time="2025-03-17T17:49:49.807584800Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd\"" Mar 17 17:49:49.809677 containerd[1491]: time="2025-03-17T17:49:49.809636803Z" level=info msg="StartContainer for \"e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd\"" Mar 17 17:49:49.837824 systemd[1]: Started cri-containerd-e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd.scope - libcontainer container e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd. Mar 17 17:49:49.857721 sshd[4740]: Accepted publickey for core from 139.178.89.65 port 60464 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:49:49.863352 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:49.877234 systemd-logind[1467]: New session 23 of user core. Mar 17 17:49:49.880763 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:49:49.883236 systemd[1]: cri-containerd-e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd.scope: Deactivated successfully. Mar 17 17:49:49.889473 containerd[1491]: time="2025-03-17T17:49:49.889123367Z" level=info msg="StartContainer for \"e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd\" returns successfully" Mar 17 17:49:49.909282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd-rootfs.mount: Deactivated successfully. 
Mar 17 17:49:49.916989 containerd[1491]: time="2025-03-17T17:49:49.916838341Z" level=info msg="shim disconnected" id=e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd namespace=k8s.io Mar 17 17:49:49.916989 containerd[1491]: time="2025-03-17T17:49:49.916929263Z" level=warning msg="cleaning up after shim disconnected" id=e0bac3e462169e5ddfbe7f1801a73c2914d39ed1228400036c5e75bfc81994fd namespace=k8s.io Mar 17 17:49:49.916989 containerd[1491]: time="2025-03-17T17:49:49.916939783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:50.766391 kubelet[2762]: E0317 17:49:50.765762 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fcsxx" podUID="0c0127c5-2a91-4623-89f3-78aab8b38ba9" Mar 17 17:49:50.790259 containerd[1491]: time="2025-03-17T17:49:50.789480857Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:49:50.807799 containerd[1491]: time="2025-03-17T17:49:50.807147703Z" level=info msg="CreateContainer within sandbox \"9dda7965a4779bbcbe931a82f701a2eb03349344f9a278aa68611d5a3d9df650\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7bed3bdac38d89e65ddafb917a59f8a22e2ee6e325f49db300a02cac5390b9c2\"" Mar 17 17:49:50.809436 containerd[1491]: time="2025-03-17T17:49:50.809332228Z" level=info msg="StartContainer for \"7bed3bdac38d89e65ddafb917a59f8a22e2ee6e325f49db300a02cac5390b9c2\"" Mar 17 17:49:50.837697 systemd[1]: Started cri-containerd-7bed3bdac38d89e65ddafb917a59f8a22e2ee6e325f49db300a02cac5390b9c2.scope - libcontainer container 7bed3bdac38d89e65ddafb917a59f8a22e2ee6e325f49db300a02cac5390b9c2. 
Mar 17 17:49:50.868987 containerd[1491]: time="2025-03-17T17:49:50.868724218Z" level=info msg="StartContainer for \"7bed3bdac38d89e65ddafb917a59f8a22e2ee6e325f49db300a02cac5390b9c2\" returns successfully"
Mar 17 17:49:51.158587 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:49:51.813526 kubelet[2762]: I0317 17:49:51.811984 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sgxkw" podStartSLOduration=5.81196118 podStartE2EDuration="5.81196118s" podCreationTimestamp="2025-03-17 17:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:51.810554831 +0000 UTC m=+353.145725820" watchObservedRunningTime="2025-03-17 17:49:51.81196118 +0000 UTC m=+353.147132169"
Mar 17 17:49:52.767548 kubelet[2762]: E0317 17:49:52.766001 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fcsxx" podUID="0c0127c5-2a91-4623-89f3-78aab8b38ba9"
Mar 17 17:49:54.120983 systemd-networkd[1365]: lxc_health: Link UP
Mar 17 17:49:54.144109 systemd-networkd[1365]: lxc_health: Gained carrier
Mar 17 17:49:55.778762 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Mar 17 17:49:58.803905 containerd[1491]: time="2025-03-17T17:49:58.803722488Z" level=info msg="StopPodSandbox for \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\""
Mar 17 17:49:58.803905 containerd[1491]: time="2025-03-17T17:49:58.803836611Z" level=info msg="TearDown network for sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" successfully"
Mar 17 17:49:58.803905 containerd[1491]: time="2025-03-17T17:49:58.803852731Z" level=info msg="StopPodSandbox for \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" returns successfully"
Mar 17 17:49:58.805634 containerd[1491]: time="2025-03-17T17:49:58.805258240Z" level=info msg="RemovePodSandbox for \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\""
Mar 17 17:49:58.805634 containerd[1491]: time="2025-03-17T17:49:58.805289601Z" level=info msg="Forcibly stopping sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\""
Mar 17 17:49:58.805634 containerd[1491]: time="2025-03-17T17:49:58.805340362Z" level=info msg="TearDown network for sandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" successfully"
Mar 17 17:49:58.808826 containerd[1491]: time="2025-03-17T17:49:58.808773434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:58.809322 containerd[1491]: time="2025-03-17T17:49:58.809276164Z" level=info msg="RemovePodSandbox \"a94b91cc0644a6a74d7f8b929b0f7090d26da88113fda79a932152cf83ea5dba\" returns successfully"
Mar 17 17:49:58.810320 containerd[1491]: time="2025-03-17T17:49:58.810093101Z" level=info msg="StopPodSandbox for \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\""
Mar 17 17:49:58.810320 containerd[1491]: time="2025-03-17T17:49:58.810226984Z" level=info msg="TearDown network for sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" successfully"
Mar 17 17:49:58.810320 containerd[1491]: time="2025-03-17T17:49:58.810247225Z" level=info msg="StopPodSandbox for \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" returns successfully"
Mar 17 17:49:58.811461 containerd[1491]: time="2025-03-17T17:49:58.810892238Z" level=info msg="RemovePodSandbox for \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\""
Mar 17 17:49:58.811461 containerd[1491]: time="2025-03-17T17:49:58.810935479Z" level=info msg="Forcibly stopping sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\""
Mar 17 17:49:58.811461 containerd[1491]: time="2025-03-17T17:49:58.811000920Z" level=info msg="TearDown network for sandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" successfully"
Mar 17 17:49:58.815259 containerd[1491]: time="2025-03-17T17:49:58.815206768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:49:58.815363 containerd[1491]: time="2025-03-17T17:49:58.815277250Z" level=info msg="RemovePodSandbox \"5fb84298f978ef06f56300744f0e783aae29b738834fd2bd2137ec7e844e915f\" returns successfully"
Mar 17 17:50:01.410142 sshd[4810]: Connection closed by 139.178.89.65 port 60464
Mar 17 17:50:01.411858 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:01.416820 systemd[1]: sshd@29-88.198.122.152:22-139.178.89.65:60464.service: Deactivated successfully.
Mar 17 17:50:01.419930 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:50:01.420991 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:50:01.422334 systemd-logind[1467]: Removed session 23.