Jan 30 15:23:27.903805 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 15:23:27.903886 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 15:23:27.903908 kernel: KASLR enabled
Jan 30 15:23:27.903919 kernel: efi: EFI v2.7 by EDK II
Jan 30 15:23:27.903928 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Jan 30 15:23:27.903938 kernel: random: crng init done
Jan 30 15:23:27.903950 kernel: ACPI: Early table checksum verification disabled
Jan 30 15:23:27.903960 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 30 15:23:27.903971 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 15:23:27.903981 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904053 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904064 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904074 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904084 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904097 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904111 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904122 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904132 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:23:27.904143 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 15:23:27.904154 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 15:23:27.904165 kernel: NUMA: Failed to initialise from firmware
Jan 30 15:23:27.904176 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 15:23:27.904186 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 30 15:23:27.904197 kernel: Zone ranges:
Jan 30 15:23:27.904210 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 15:23:27.904221 kernel: DMA32 empty
Jan 30 15:23:27.904234 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 15:23:27.904245 kernel: Movable zone start for each node
Jan 30 15:23:27.904255 kernel: Early memory node ranges
Jan 30 15:23:27.904266 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 30 15:23:27.904277 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 30 15:23:27.904288 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 30 15:23:27.904299 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 30 15:23:27.904309 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 30 15:23:27.904320 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 15:23:27.904331 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 15:23:27.904342 kernel: psci: probing for conduit method from ACPI.
Jan 30 15:23:27.904354 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 15:23:27.904365 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 15:23:27.904376 kernel: psci: Trusted OS migration not required
Jan 30 15:23:27.904392 kernel: psci: SMC Calling Convention v1.1
Jan 30 15:23:27.904404 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 15:23:27.904416 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 15:23:27.904429 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 15:23:27.904441 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 15:23:27.904452 kernel: Detected PIPT I-cache on CPU0
Jan 30 15:23:27.904464 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 15:23:27.904475 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 15:23:27.904487 kernel: CPU features: detected: Spectre-v4
Jan 30 15:23:27.904498 kernel: CPU features: detected: Spectre-BHB
Jan 30 15:23:27.904509 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 15:23:27.904521 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 15:23:27.904532 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 15:23:27.904543 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 15:23:27.904565 kernel: alternatives: applying boot alternatives
Jan 30 15:23:27.904580 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 15:23:27.904593 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 15:23:27.904608 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 15:23:27.904625 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 15:23:27.904640 kernel: Fallback order for Node 0: 0
Jan 30 15:23:27.904657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 30 15:23:27.904691 kernel: Policy zone: Normal
Jan 30 15:23:27.904710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 15:23:27.904726 kernel: software IO TLB: area num 2.
Jan 30 15:23:27.904742 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 15:23:27.904765 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Jan 30 15:23:27.904780 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 15:23:27.904791 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 15:23:27.904808 kernel: rcu: RCU event tracing is enabled.
Jan 30 15:23:27.904820 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 15:23:27.904831 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 15:23:27.904843 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 15:23:27.904854 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 15:23:27.904866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 15:23:27.904878 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 15:23:27.904889 kernel: GICv3: 256 SPIs implemented
Jan 30 15:23:27.904903 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 15:23:27.904914 kernel: Root IRQ handler: gic_handle_irq
Jan 30 15:23:27.904926 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 15:23:27.904937 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 15:23:27.904948 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 15:23:27.904960 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 15:23:27.904972 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 15:23:27.904992 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 15:23:27.905006 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 15:23:27.905018 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 15:23:27.905029 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 15:23:27.905044 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 15:23:27.905056 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 15:23:27.905068 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 15:23:27.905080 kernel: Console: colour dummy device 80x25
Jan 30 15:23:27.905092 kernel: ACPI: Core revision 20230628
Jan 30 15:23:27.905104 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 15:23:27.905116 kernel: pid_max: default: 32768 minimum: 301
Jan 30 15:23:27.905128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 15:23:27.905139 kernel: landlock: Up and running.
Jan 30 15:23:27.905151 kernel: SELinux: Initializing.
Jan 30 15:23:27.905161 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 15:23:27.905172 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 15:23:27.905180 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 15:23:27.905187 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 15:23:27.905194 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 15:23:27.905202 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 15:23:27.905209 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 15:23:27.905216 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 15:23:27.905223 kernel: Remapping and enabling EFI services.
Jan 30 15:23:27.905232 kernel: smp: Bringing up secondary CPUs ...
Jan 30 15:23:27.905240 kernel: Detected PIPT I-cache on CPU1
Jan 30 15:23:27.905247 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 15:23:27.905254 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 15:23:27.905261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 15:23:27.905269 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 15:23:27.905276 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 15:23:27.905283 kernel: SMP: Total of 2 processors activated.
Jan 30 15:23:27.905290 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 15:23:27.905298 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 15:23:27.905306 kernel: CPU features: detected: Common not Private translations
Jan 30 15:23:27.905313 kernel: CPU features: detected: CRC32 instructions
Jan 30 15:23:27.905326 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 15:23:27.905335 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 15:23:27.905342 kernel: CPU features: detected: LSE atomic instructions
Jan 30 15:23:27.905350 kernel: CPU features: detected: Privileged Access Never
Jan 30 15:23:27.905357 kernel: CPU features: detected: RAS Extension Support
Jan 30 15:23:27.905365 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 15:23:27.905373 kernel: CPU: All CPU(s) started at EL1
Jan 30 15:23:27.905382 kernel: alternatives: applying system-wide alternatives
Jan 30 15:23:27.905389 kernel: devtmpfs: initialized
Jan 30 15:23:27.905397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 15:23:27.905404 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 15:23:27.905412 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 15:23:27.905419 kernel: SMBIOS 3.0.0 present.
Jan 30 15:23:27.905427 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 15:23:27.905436 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 15:23:27.905444 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 15:23:27.905451 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 15:23:27.905459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 15:23:27.905467 kernel: audit: initializing netlink subsys (disabled)
Jan 30 15:23:27.905474 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Jan 30 15:23:27.905482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 15:23:27.905489 kernel: cpuidle: using governor menu
Jan 30 15:23:27.905497 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 15:23:27.905505 kernel: ASID allocator initialised with 32768 entries
Jan 30 15:23:27.905513 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 15:23:27.905520 kernel: Serial: AMBA PL011 UART driver
Jan 30 15:23:27.905528 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 15:23:27.905537 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 15:23:27.905547 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 15:23:27.905555 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 15:23:27.905562 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 15:23:27.905571 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 15:23:27.905583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 15:23:27.905591 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 15:23:27.905601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 15:23:27.905609 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 15:23:27.905618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 15:23:27.905625 kernel: ACPI: Added _OSI(Module Device)
Jan 30 15:23:27.905633 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 15:23:27.905644 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 15:23:27.905651 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 15:23:27.905670 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 15:23:27.905678 kernel: ACPI: Interpreter enabled
Jan 30 15:23:27.905688 kernel: ACPI: Using GIC for interrupt routing
Jan 30 15:23:27.905697 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 15:23:27.905705 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 15:23:27.905714 kernel: printk: console [ttyAMA0] enabled
Jan 30 15:23:27.905723 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 15:23:27.905893 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 15:23:27.905976 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 15:23:27.906068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 15:23:27.906138 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 15:23:27.906203 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 15:23:27.906213 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 15:23:27.906221 kernel: PCI host bridge to bus 0000:00
Jan 30 15:23:27.906295 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 15:23:27.906361 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 15:23:27.906420 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 15:23:27.906479 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 15:23:27.906569 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 15:23:27.906969 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 15:23:27.907080 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 15:23:27.907152 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 15:23:27.907235 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.907304 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 15:23:27.907386 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.907457 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 15:23:27.907536 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.907619 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 15:23:27.907772 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.907845 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 15:23:27.908403 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.908490 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 15:23:27.910782 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.910893 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 15:23:27.910977 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.911115 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 15:23:27.911196 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.911268 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 15:23:27.911346 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 15:23:27.911485 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 15:23:27.911578 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 15:23:27.911644 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 30 15:23:27.911739 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 15:23:27.911810 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 15:23:27.911879 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 15:23:27.911951 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 15:23:27.912050 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 15:23:27.912128 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 15:23:27.912206 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 15:23:27.912278 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 15:23:27.912348 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 15:23:27.912498 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 15:23:27.912576 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 15:23:27.914723 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 15:23:27.914896 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 15:23:27.915066 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 15:23:27.915159 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 15:23:27.915229 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 15:23:27.915300 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 15:23:27.915389 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 15:23:27.915459 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 15:23:27.915529 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 15:23:27.915598 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 15:23:27.917251 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 15:23:27.917353 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 15:23:27.917423 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 15:23:27.917502 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 15:23:27.917589 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 15:23:27.917710 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 15:23:27.917795 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 15:23:27.917862 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 15:23:27.917928 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 15:23:27.918017 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 15:23:27.918096 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 15:23:27.918167 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 15:23:27.918237 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 15:23:27.918303 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 15:23:27.918368 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 15:23:27.918439 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 15:23:27.918505 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 15:23:27.918570 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 15:23:27.918645 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 15:23:27.918774 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 15:23:27.918850 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 15:23:27.918922 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 15:23:27.919009 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 15:23:27.919085 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 15:23:27.919158 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 15:23:27.919226 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 15:23:27.919299 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 15:23:27.919381 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 15:23:27.919467 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 15:23:27.919557 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 15:23:27.921763 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 15:23:27.921868 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 15:23:27.921948 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 15:23:27.922054 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 15:23:27.922161 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 15:23:27.922239 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 15:23:27.922307 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 15:23:27.922379 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 15:23:27.922448 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 15:23:27.922523 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 15:23:27.922591 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 15:23:27.922673 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 15:23:27.922759 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 15:23:27.922840 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 15:23:27.922924 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 15:23:27.923050 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 15:23:27.923141 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 15:23:27.923214 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 15:23:27.923281 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 15:23:27.923350 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 15:23:27.923417 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 15:23:27.923485 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 15:23:27.923553 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 15:23:27.923620 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 15:23:27.924865 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 15:23:27.924970 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 15:23:27.925110 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 15:23:27.925193 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 15:23:27.925262 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 15:23:27.925332 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 15:23:27.925399 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 15:23:27.925470 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 15:23:27.925545 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 15:23:27.925616 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 15:23:27.925902 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 15:23:27.926038 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 15:23:27.926136 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 15:23:27.926210 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 15:23:27.926279 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 15:23:27.926348 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 15:23:27.926434 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 15:23:27.926504 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 15:23:27.926573 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 15:23:27.926646 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 15:23:27.926737 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 15:23:27.926805 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 15:23:27.926870 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 15:23:27.926949 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 15:23:27.927048 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 15:23:27.927135 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 15:23:27.927204 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 15:23:27.927271 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 15:23:27.927340 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 15:23:27.927407 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 15:23:27.927483 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 15:23:27.927551 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 15:23:27.927617 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 15:23:27.927734 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 15:23:27.927802 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 15:23:27.927877 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 15:23:27.927961 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 15:23:27.928086 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 15:23:27.928167 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 15:23:27.928232 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 15:23:27.928297 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 15:23:27.928370 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 15:23:27.928438 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 15:23:27.928506 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 15:23:27.928576 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 15:23:27.928643 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 15:23:27.928724 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 15:23:27.928800 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 15:23:27.928871 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 15:23:27.928950 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 15:23:27.929055 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 15:23:27.929141 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 15:23:27.929213 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 15:23:27.929281 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 15:23:27.929351 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 15:23:27.929417 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 15:23:27.929484 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 15:23:27.929550 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 15:23:27.929619 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 15:23:27.931603 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 15:23:27.932772 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 15:23:27.932865 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 15:23:27.932942 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 15:23:27.933060 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 15:23:27.933125 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 15:23:27.933206 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 15:23:27.933269 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 15:23:27.933340 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 15:23:27.933409 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 15:23:27.933471 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 15:23:27.933531 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 15:23:27.933600 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 15:23:27.935726 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 15:23:27.935884 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 15:23:27.935972 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 15:23:27.936081 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 15:23:27.936180 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 15:23:27.936254 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 15:23:27.936320 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 15:23:27.936381 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 15:23:27.936456 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 15:23:27.936579 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 15:23:27.936649 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 15:23:27.936746 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 15:23:27.936818 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 15:23:27.936881 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 15:23:27.936950 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 15:23:27.937063 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 15:23:27.937131 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 15:23:27.937201 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 15:23:27.937265 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 15:23:27.937334 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 15:23:27.937345 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 15:23:27.937388 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 15:23:27.937397 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 15:23:27.937405 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 15:23:27.937413 kernel: iommu: Default domain type: Translated
Jan 30 15:23:27.937421 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 15:23:27.937429 kernel: efivars: Registered efivars operations
Jan 30 15:23:27.937437 kernel: vgaarb: loaded
Jan 30 15:23:27.937448 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 15:23:27.937457 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 15:23:27.937465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 15:23:27.937472 kernel: pnp: PnP ACPI init
Jan 30 15:23:27.937573 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 15:23:27.937586 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 15:23:27.937607 kernel: NET: Registered PF_INET protocol family
Jan 30 15:23:27.937616 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 15:23:27.937628 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 15:23:27.937636 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 15:23:27.937644 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 15:23:27.937652 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 15:23:27.938780 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 15:23:27.938797 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 15:23:27.938806 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 15:23:27.938814 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 15:23:27.938943 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 15:23:27.938963 kernel: PCI: CLS 0 bytes, default 64
Jan 30 15:23:27.938971 kernel: kvm [1]: HYP mode not available
Jan 30 15:23:27.938982 kernel: Initialise system trusted keyrings
Jan 30 15:23:27.939044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 15:23:27.939053 kernel: Key type asymmetric registered
Jan 30 15:23:27.939061 kernel: Asymmetric key parser 'x509' registered
Jan 30 15:23:27.939069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 15:23:27.939077 kernel: io scheduler mq-deadline registered
Jan 30 15:23:27.939085 kernel: io scheduler kyber registered
Jan 30 15:23:27.939096 kernel: io scheduler bfq registered
Jan 30 15:23:27.939104 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 15:23:27.939194 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 15:23:27.939265 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 15:23:27.939332 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:23:27.939411 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 15:23:27.939481 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 15:23:27.939552 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 30 15:23:27.939624 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 30 15:23:27.939950 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 30 15:23:27.940057 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.940133 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 30 15:23:27.940200 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 30 15:23:27.940273 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.940343 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 30 15:23:27.940410 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 30 15:23:27.940477 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.940550 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 30 15:23:27.940618 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 30 15:23:27.940741 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.940820 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 30 15:23:27.940898 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 30 15:23:27.940967 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.941049 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 30 15:23:27.941119 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 30 15:23:27.941195 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 
15:23:27.941206 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 30 15:23:27.941274 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 30 15:23:27.941341 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 30 15:23:27.941407 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:23:27.941418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 15:23:27.941430 kernel: ACPI: button: Power Button [PWRB] Jan 30 15:23:27.941440 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 15:23:27.941513 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 30 15:23:27.941589 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 30 15:23:27.941716 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 30 15:23:27.941730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:23:27.941738 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 30 15:23:27.941815 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 30 15:23:27.941831 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 30 15:23:27.941839 kernel: thunder_xcv, ver 1.0 Jan 30 15:23:27.941847 kernel: thunder_bgx, ver 1.0 Jan 30 15:23:27.941855 kernel: nicpf, ver 1.0 Jan 30 15:23:27.941863 kernel: nicvf, ver 1.0 Jan 30 15:23:27.941942 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 15:23:27.942049 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T15:23:27 UTC (1738250607) Jan 30 15:23:27.942062 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 15:23:27.942073 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 15:23:27.942082 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 15:23:27.942090 kernel: watchdog: Hard watchdog permanently disabled Jan 30 15:23:27.942104 
kernel: NET: Registered PF_INET6 protocol family Jan 30 15:23:27.942130 kernel: Segment Routing with IPv6 Jan 30 15:23:27.942144 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:23:27.942154 kernel: NET: Registered PF_PACKET protocol family Jan 30 15:23:27.942163 kernel: Key type dns_resolver registered Jan 30 15:23:27.942171 kernel: registered taskstats version 1 Jan 30 15:23:27.942179 kernel: Loading compiled-in X.509 certificates Jan 30 15:23:27.942189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 15:23:27.942197 kernel: Key type .fscrypt registered Jan 30 15:23:27.942205 kernel: Key type fscrypt-provisioning registered Jan 30 15:23:27.942213 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 15:23:27.942221 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:23:27.942228 kernel: ima: No architecture policies found Jan 30 15:23:27.942236 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 15:23:27.942244 kernel: clk: Disabling unused clocks Jan 30 15:23:27.942254 kernel: Freeing unused kernel memory: 39360K Jan 30 15:23:27.942262 kernel: Run /init as init process Jan 30 15:23:27.942270 kernel: with arguments: Jan 30 15:23:27.942278 kernel: /init Jan 30 15:23:27.942285 kernel: with environment: Jan 30 15:23:27.942293 kernel: HOME=/ Jan 30 15:23:27.942301 kernel: TERM=linux Jan 30 15:23:27.942308 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:23:27.942318 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:23:27.942330 systemd[1]: Detected virtualization kvm. Jan 30 15:23:27.942339 systemd[1]: Detected architecture arm64. 
Jan 30 15:23:27.942347 systemd[1]: Running in initrd. Jan 30 15:23:27.942355 systemd[1]: No hostname configured, using default hostname. Jan 30 15:23:27.942364 systemd[1]: Hostname set to . Jan 30 15:23:27.942373 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:23:27.942382 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:23:27.942391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:23:27.942400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:23:27.942409 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:23:27.942418 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:23:27.942426 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:23:27.942435 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:23:27.942445 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:23:27.942455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 15:23:27.942464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:23:27.942472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:23:27.942482 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:23:27.942491 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:23:27.942499 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:23:27.942507 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:23:27.942517 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 15:23:27.942525 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:23:27.942535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:23:27.942544 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:23:27.942553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:23:27.942561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:23:27.942570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:23:27.942578 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:23:27.942587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:23:27.942595 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:23:27.942605 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:23:27.942614 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:23:27.942622 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:23:27.942631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:23:27.942641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:23:27.942649 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:23:27.942694 systemd-journald[236]: Collecting audit messages is disabled. Jan 30 15:23:27.942719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:23:27.942728 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:23:27.942737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:23:27.942748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 15:23:27.942756 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:23:27.942765 kernel: Bridge firewalling registered Jan 30 15:23:27.942774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:23:27.942783 systemd-journald[236]: Journal started Jan 30 15:23:27.942804 systemd-journald[236]: Runtime Journal (/run/log/journal/b1843e1d14aa4ab5963b5e0ff14b8d31) is 8.0M, max 76.5M, 68.5M free. Jan 30 15:23:27.919910 systemd-modules-load[237]: Inserted module 'overlay' Jan 30 15:23:27.943355 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 30 15:23:27.945278 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:23:27.948580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:23:27.958156 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:23:27.974975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:23:27.978555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:23:27.982892 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:23:27.988270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:23:27.995886 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:23:27.998321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:23:28.009846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:23:28.013817 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:23:28.026107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 15:23:28.028724 dracut-cmdline[268]: dracut-dracut-053 Jan 30 15:23:28.036690 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 15:23:28.056388 systemd-resolved[278]: Positive Trust Anchors: Jan 30 15:23:28.057141 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:23:28.058031 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:23:28.068015 systemd-resolved[278]: Defaulting to hostname 'linux'. Jan 30 15:23:28.069169 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:23:28.070422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:23:28.114748 kernel: SCSI subsystem initialized Jan 30 15:23:28.119780 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:23:28.129728 kernel: iscsi: registered transport (tcp) Jan 30 15:23:28.142845 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:23:28.142951 kernel: QLogic iSCSI HBA Driver Jan 30 15:23:28.194248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 15:23:28.201015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:23:28.223026 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 15:23:28.223092 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:23:28.223105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:23:28.274718 kernel: raid6: neonx8 gen() 15372 MB/s Jan 30 15:23:28.291735 kernel: raid6: neonx4 gen() 15558 MB/s Jan 30 15:23:28.308728 kernel: raid6: neonx2 gen() 13041 MB/s Jan 30 15:23:28.325756 kernel: raid6: neonx1 gen() 10388 MB/s Jan 30 15:23:28.342772 kernel: raid6: int64x8 gen() 6871 MB/s Jan 30 15:23:28.359744 kernel: raid6: int64x4 gen() 7268 MB/s Jan 30 15:23:28.376720 kernel: raid6: int64x2 gen() 6046 MB/s Jan 30 15:23:28.393743 kernel: raid6: int64x1 gen() 4990 MB/s Jan 30 15:23:28.393814 kernel: raid6: using algorithm neonx4 gen() 15558 MB/s Jan 30 15:23:28.410767 kernel: raid6: .... xor() 12218 MB/s, rmw enabled Jan 30 15:23:28.410855 kernel: raid6: using neon recovery algorithm Jan 30 15:23:28.415707 kernel: xor: measuring software checksum speed Jan 30 15:23:28.415767 kernel: 8regs : 19764 MB/sec Jan 30 15:23:28.415789 kernel: 32regs : 17631 MB/sec Jan 30 15:23:28.416717 kernel: arm64_neon : 26014 MB/sec Jan 30 15:23:28.416764 kernel: xor: using function: arm64_neon (26014 MB/sec) Jan 30 15:23:28.468721 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:23:28.485702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:23:28.493853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:23:28.507311 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jan 30 15:23:28.510807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 15:23:28.520887 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:23:28.538689 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Jan 30 15:23:28.576312 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:23:28.582924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:23:28.632850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:23:28.643368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:23:28.658777 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:23:28.662753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:23:28.664467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:23:28.665784 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:23:28.674040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:23:28.692024 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:23:28.735540 kernel: scsi host0: Virtio SCSI HBA Jan 30 15:23:28.744840 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 15:23:28.744923 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 15:23:28.759850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:23:28.791503 kernel: ACPI: bus type USB registered Jan 30 15:23:28.791528 kernel: usbcore: registered new interface driver usbfs Jan 30 15:23:28.791539 kernel: usbcore: registered new interface driver hub Jan 30 15:23:28.791548 kernel: usbcore: registered new device driver usb Jan 30 15:23:28.760025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 15:23:28.791508 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:23:28.794192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:23:28.794379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:23:28.795405 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:23:28.805853 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 15:23:28.851279 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 15:23:28.851394 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 15:23:28.851478 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 30 15:23:28.851597 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 15:23:28.851715 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 15:23:28.851894 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 15:23:28.852021 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 30 15:23:28.852141 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 15:23:28.852154 kernel: hub 1-0:1.0: USB hub found Jan 30 15:23:28.852300 kernel: hub 1-0:1.0: 4 ports detected Jan 30 15:23:28.852413 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 30 15:23:28.852534 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 30 15:23:28.852677 kernel: hub 2-0:1.0: USB hub found Jan 30 15:23:28.852789 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 30 15:23:28.852922 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 15:23:28.853056 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 30 15:23:28.853171 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 30 15:23:28.853279 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 15:23:28.853384 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:23:28.853397 kernel: GPT:17805311 != 80003071 Jan 30 15:23:28.853409 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:23:28.853420 kernel: GPT:17805311 != 80003071 Jan 30 15:23:28.853432 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:23:28.853443 kernel: hub 2-0:1.0: 4 ports detected Jan 30 15:23:28.853543 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:23:28.853559 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 30 15:23:28.806277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:23:28.832629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:23:28.840391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:23:28.880107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:23:28.902687 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502) Jan 30 15:23:28.908692 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (510) Jan 30 15:23:28.911576 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 15:23:28.921846 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Jan 30 15:23:28.929129 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 15:23:28.935493 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 15:23:28.937338 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 15:23:28.948895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:23:28.957829 disk-uuid[570]: Primary Header is updated. Jan 30 15:23:28.957829 disk-uuid[570]: Secondary Entries is updated. Jan 30 15:23:28.957829 disk-uuid[570]: Secondary Header is updated. Jan 30 15:23:28.962689 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:23:28.973704 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:23:28.976727 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:23:29.074033 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 15:23:29.314706 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 30 15:23:29.451299 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 30 15:23:29.451360 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 15:23:29.452795 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 30 15:23:29.508360 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 30 15:23:29.508821 kernel: usbcore: registered new interface driver usbhid Jan 30 15:23:29.508848 kernel: usbhid: USB HID core driver Jan 30 15:23:29.982752 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:23:29.984167 disk-uuid[572]: The operation has completed successfully. 
Jan 30 15:23:30.037204 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:23:30.037310 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:23:30.050862 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:23:30.067512 sh[590]: Success Jan 30 15:23:30.081713 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 15:23:30.137061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:23:30.146627 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:23:30.147362 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 15:23:30.177760 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 15:23:30.177835 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 15:23:30.177852 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:23:30.178762 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:23:30.178800 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:23:30.184705 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 15:23:30.186508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:23:30.187789 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:23:30.193940 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:23:30.197428 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 15:23:30.214702 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 15:23:30.214766 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 15:23:30.214780 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:23:30.218753 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:23:30.218816 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:23:30.231079 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 15:23:30.230918 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 15:23:30.237071 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:23:30.242857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:23:30.324713 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:23:30.332917 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:23:30.336916 ignition[684]: Ignition 2.19.0 Jan 30 15:23:30.336926 ignition[684]: Stage: fetch-offline Jan 30 15:23:30.336962 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:23:30.336970 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 15:23:30.337193 ignition[684]: parsed url from cmdline: "" Jan 30 15:23:30.337197 ignition[684]: no config URL provided Jan 30 15:23:30.337202 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:23:30.340327 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 15:23:30.337209 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:23:30.337214 ignition[684]: failed to fetch config: resource requires networking Jan 30 15:23:30.337387 ignition[684]: Ignition finished successfully Jan 30 15:23:30.356252 systemd-networkd[777]: lo: Link UP Jan 30 15:23:30.356265 systemd-networkd[777]: lo: Gained carrier Jan 30 15:23:30.358529 systemd-networkd[777]: Enumeration completed Jan 30 15:23:30.358878 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:23:30.360202 systemd[1]: Reached target network.target - Network. Jan 30 15:23:30.361639 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:23:30.361642 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:23:30.362416 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:23:30.362419 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:23:30.362954 systemd-networkd[777]: eth0: Link UP Jan 30 15:23:30.362957 systemd-networkd[777]: eth0: Gained carrier Jan 30 15:23:30.362964 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:23:30.368610 systemd-networkd[777]: eth1: Link UP Jan 30 15:23:30.368613 systemd-networkd[777]: eth1: Gained carrier Jan 30 15:23:30.368625 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:23:30.371346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 15:23:30.385553 ignition[781]: Ignition 2.19.0
Jan 30 15:23:30.386567 ignition[781]: Stage: fetch
Jan 30 15:23:30.386797 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:30.386809 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:30.386909 ignition[781]: parsed url from cmdline: ""
Jan 30 15:23:30.386912 ignition[781]: no config URL provided
Jan 30 15:23:30.386917 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 15:23:30.386924 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 30 15:23:30.386943 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 30 15:23:30.387840 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 30 15:23:30.402795 systemd-networkd[777]: eth0: DHCPv4 address 49.13.124.2/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 15:23:30.415837 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 15:23:30.588037 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 30 15:23:30.596011 ignition[781]: GET result: OK
Jan 30 15:23:30.596151 ignition[781]: parsing config with SHA512: 7ebc74961da77b6407242ad3b97f494e7197c1c1103b8d8b28b6182f37d8d686b44c2e0985bf03c852afd7cb20ac52a18d70fb6cdc67f044df5c78c326e14415
Jan 30 15:23:30.606904 unknown[781]: fetched base config from "system"
Jan 30 15:23:30.607482 unknown[781]: fetched base config from "system"
Jan 30 15:23:30.607907 ignition[781]: fetch: fetch complete
Jan 30 15:23:30.607490 unknown[781]: fetched user config from "hetzner"
Jan 30 15:23:30.607913 ignition[781]: fetch: fetch passed
Jan 30 15:23:30.609956 ignition[781]: Ignition finished successfully
Jan 30 15:23:30.612766 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 15:23:30.619931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 15:23:30.640967 ignition[788]: Ignition 2.19.0
Jan 30 15:23:30.640992 ignition[788]: Stage: kargs
Jan 30 15:23:30.641187 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:30.641197 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:30.643827 ignition[788]: kargs: kargs passed
Jan 30 15:23:30.643900 ignition[788]: Ignition finished successfully
Jan 30 15:23:30.647732 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 15:23:30.657045 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 15:23:30.669812 ignition[794]: Ignition 2.19.0
Jan 30 15:23:30.669823 ignition[794]: Stage: disks
Jan 30 15:23:30.670020 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:30.670037 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:30.671114 ignition[794]: disks: disks passed
Jan 30 15:23:30.671176 ignition[794]: Ignition finished successfully
Jan 30 15:23:30.673746 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 15:23:30.675647 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 15:23:30.676609 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 15:23:30.677793 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 15:23:30.678796 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 15:23:30.679691 systemd[1]: Reached target basic.target - Basic System.
Jan 30 15:23:30.687013 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 15:23:30.702954 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 15:23:30.706380 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 15:23:30.711874 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 15:23:30.755739 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 15:23:30.757111 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 15:23:30.759331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 15:23:30.764794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 15:23:30.769432 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 15:23:30.771077 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 15:23:30.773853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 15:23:30.773887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 15:23:30.780198 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 15:23:30.784845 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (810)
Jan 30 15:23:30.784890 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 15:23:30.785838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 15:23:30.791947 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 15:23:30.792028 kernel: BTRFS info (device sda6): using free space tree
Jan 30 15:23:30.792043 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 15:23:30.792054 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 15:23:30.794192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 15:23:30.835450 coreos-metadata[812]: Jan 30 15:23:30.835 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 30 15:23:30.837832 coreos-metadata[812]: Jan 30 15:23:30.837 INFO Fetch successful
Jan 30 15:23:30.838889 coreos-metadata[812]: Jan 30 15:23:30.838 INFO wrote hostname ci-4081-3-0-1-b815e480da to /sysroot/etc/hostname
Jan 30 15:23:30.844721 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 15:23:30.852019 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 15:23:30.857818 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 30 15:23:30.863694 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 15:23:30.869698 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 15:23:30.981369 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 15:23:30.989850 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 15:23:30.994845 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 15:23:31.003736 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 15:23:31.025420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 15:23:31.029831 ignition[927]: INFO : Ignition 2.19.0
Jan 30 15:23:31.029831 ignition[927]: INFO : Stage: mount
Jan 30 15:23:31.031238 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:31.031238 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:31.031238 ignition[927]: INFO : mount: mount passed
Jan 30 15:23:31.031238 ignition[927]: INFO : Ignition finished successfully
Jan 30 15:23:31.033523 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 15:23:31.040844 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 15:23:31.178657 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 15:23:31.187885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 15:23:31.197727 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939)
Jan 30 15:23:31.199960 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 15:23:31.200032 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 15:23:31.200056 kernel: BTRFS info (device sda6): using free space tree
Jan 30 15:23:31.204705 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 15:23:31.204777 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 15:23:31.206606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 15:23:31.229418 ignition[956]: INFO : Ignition 2.19.0
Jan 30 15:23:31.229418 ignition[956]: INFO : Stage: files
Jan 30 15:23:31.230648 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:31.230648 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:31.232330 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 15:23:31.232330 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 15:23:31.232330 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 15:23:31.235911 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 15:23:31.238876 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 15:23:31.241522 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 15:23:31.240810 unknown[956]: wrote ssh authorized keys file for user: core
Jan 30 15:23:31.246361 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 15:23:31.246361 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 15:23:31.320345 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 15:23:31.440394 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 15:23:31.441859 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 15:23:31.441859 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 15:23:31.587275 systemd-networkd[777]: eth0: Gained IPv6LL
Jan 30 15:23:31.763953 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 15:23:31.871852 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:23:31.873538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 15:23:31.880921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 15:23:32.291067 systemd-networkd[777]: eth1: Gained IPv6LL
Jan 30 15:23:32.486733 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 15:23:33.772256 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 15:23:33.772256 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 15:23:33.774567 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 15:23:33.782324 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 15:23:33.782324 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:23:33.782324 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:23:33.782324 ignition[956]: INFO : files: files passed
Jan 30 15:23:33.782324 ignition[956]: INFO : Ignition finished successfully
Jan 30 15:23:33.777131 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 15:23:33.786592 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 15:23:33.790200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 15:23:33.791289 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 15:23:33.791409 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 15:23:33.815615 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:23:33.815615 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:23:33.819133 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:23:33.821575 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:23:33.823007 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 15:23:33.829992 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 15:23:33.864123 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 15:23:33.864305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 15:23:33.866614 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 15:23:33.868042 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 15:23:33.869414 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 15:23:33.875864 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 15:23:33.894103 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:23:33.901056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 15:23:33.915058 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:23:33.916150 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:23:33.917586 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 15:23:33.918856 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 15:23:33.919047 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:23:33.920648 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 15:23:33.921340 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 15:23:33.922383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 15:23:33.923372 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 15:23:33.924352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 15:23:33.925395 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 15:23:33.926439 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 15:23:33.927584 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 15:23:33.928562 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 15:23:33.929645 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 15:23:33.930539 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 15:23:33.930680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 15:23:33.933045 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:23:33.934128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:23:33.935357 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 15:23:33.939723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:23:33.940852 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 15:23:33.941071 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 15:23:33.943672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 15:23:33.943827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:23:33.945183 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 15:23:33.945301 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 15:23:33.946522 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 15:23:33.946640 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 15:23:33.955343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 15:23:33.957799 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 15:23:33.959148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:23:33.962945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 15:23:33.964049 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 15:23:33.964752 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:23:33.966279 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 15:23:33.967073 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 15:23:33.973322 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 15:23:33.975408 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 15:23:33.978707 ignition[1009]: INFO : Ignition 2.19.0
Jan 30 15:23:33.978707 ignition[1009]: INFO : Stage: umount
Jan 30 15:23:33.979763 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:23:33.979763 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 15:23:33.982031 ignition[1009]: INFO : umount: umount passed
Jan 30 15:23:33.982031 ignition[1009]: INFO : Ignition finished successfully
Jan 30 15:23:33.982202 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 15:23:33.982377 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 15:23:33.984392 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 15:23:33.984450 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 15:23:33.985118 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 15:23:33.985163 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 15:23:33.985715 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 15:23:33.985752 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 15:23:33.986620 systemd[1]: Stopped target network.target - Network.
Jan 30 15:23:33.988103 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 15:23:33.988164 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 15:23:33.988875 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 15:23:33.989854 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 15:23:33.993113 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:23:33.995486 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 15:23:33.998562 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 15:23:33.999800 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 15:23:33.999850 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 15:23:34.000747 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 15:23:34.000785 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 15:23:34.002382 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 15:23:34.002440 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 15:23:34.003917 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 15:23:34.004009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 15:23:34.004809 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 15:23:34.006603 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 15:23:34.009849 systemd-networkd[777]: eth0: DHCPv6 lease lost
Jan 30 15:23:34.012634 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 15:23:34.013803 systemd-networkd[777]: eth1: DHCPv6 lease lost
Jan 30 15:23:34.013951 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 15:23:34.014140 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 15:23:34.017121 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 15:23:34.017250 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 15:23:34.018299 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 15:23:34.018399 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 15:23:34.020508 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 15:23:34.020565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:23:34.021653 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 15:23:34.021723 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 15:23:34.026861 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 15:23:34.028147 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 15:23:34.028223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 15:23:34.029281 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 15:23:34.029321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:23:34.029904 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 15:23:34.029943 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:23:34.030528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 15:23:34.030564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:23:34.031720 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:23:34.050447 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 15:23:34.050616 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:23:34.052765 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 15:23:34.052814 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:23:34.053444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 15:23:34.053476 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:23:34.054197 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 15:23:34.054246 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 15:23:34.056172 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 15:23:34.056224 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 15:23:34.057743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 15:23:34.057792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:23:34.064986 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 15:23:34.065540 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 15:23:34.065603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:23:34.068774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:23:34.068848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:23:34.070461 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 15:23:34.072823 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 15:23:34.076567 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 15:23:34.076833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 15:23:34.078407 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 15:23:34.087922 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 15:23:34.098621 systemd[1]: Switching root.
Jan 30 15:23:34.130122 systemd-journald[236]: Journal stopped
Jan 30 15:23:35.056090 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jan 30 15:23:35.056173 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 15:23:35.056189 kernel: SELinux: policy capability open_perms=1
Jan 30 15:23:35.056199 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 15:23:35.056209 kernel: SELinux: policy capability always_check_network=0
Jan 30 15:23:35.056223 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 15:23:35.056236 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 15:23:35.056254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 15:23:35.056265 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 15:23:35.056276 kernel: audit: type=1403 audit(1738250614.313:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 15:23:35.056287 systemd[1]: Successfully loaded SELinux policy in 39.629ms.
Jan 30 15:23:35.056308 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.830ms.
Jan 30 15:23:35.056319 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:23:35.056331 systemd[1]: Detected virtualization kvm.
Jan 30 15:23:35.056343 systemd[1]: Detected architecture arm64.
Jan 30 15:23:35.056354 systemd[1]: Detected first boot.
Jan 30 15:23:35.056364 systemd[1]: Hostname set to .
Jan 30 15:23:35.056375 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 15:23:35.056385 zram_generator::config[1055]: No configuration found.
Jan 30 15:23:35.056397 systemd[1]: Populated /etc with preset unit settings.
Jan 30 15:23:35.056407 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 15:23:35.056419 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 15:23:35.056430 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 15:23:35.056442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 15:23:35.056452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 15:23:35.056468 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 15:23:35.056479 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 15:23:35.056490 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 15:23:35.056500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 15:23:35.056511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 15:23:35.056524 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 15:23:35.056534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:23:35.056545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:23:35.056555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 15:23:35.056565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 15:23:35.056580 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 15:23:35.056595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:23:35.056605 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 15:23:35.056618 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:23:35.056630 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 15:23:35.056640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 15:23:35.056651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 15:23:35.056696 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 15:23:35.056708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:23:35.056723 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 15:23:35.056736 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 15:23:35.056747 systemd[1]: Reached target swap.target - Swaps.
Jan 30 15:23:35.056761 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 15:23:35.056771 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 15:23:35.056781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:23:35.056792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:23:35.056802 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:23:35.056813 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 15:23:35.056823 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 15:23:35.056834 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 15:23:35.056845 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 15:23:35.056856 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 15:23:35.056866 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 15:23:35.056877 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 15:23:35.056891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 15:23:35.056905 systemd[1]: Reached target machines.target - Containers.
Jan 30 15:23:35.056915 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 15:23:35.056937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:23:35.056959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 15:23:35.056974 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 15:23:35.056985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:23:35.056996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:23:35.057006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:23:35.057019 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 15:23:35.057030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:23:35.057041 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 15:23:35.057051 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 15:23:35.057062 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 15:23:35.057072 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 15:23:35.057083 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 15:23:35.057092 kernel: fuse: init (API version 7.39)
Jan 30 15:23:35.057102 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 15:23:35.057115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 15:23:35.057126 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 15:23:35.057137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 15:23:35.057193 kernel: loop: module loaded
Jan 30 15:23:35.057208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 15:23:35.057219 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 15:23:35.057230 kernel: ACPI: bus type drm_connector registered
Jan 30 15:23:35.057239 systemd[1]: Stopped verity-setup.service.
Jan 30 15:23:35.057250 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 15:23:35.057264 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 15:23:35.057276 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 15:23:35.057287 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 15:23:35.057297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 15:23:35.057309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 15:23:35.057320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:23:35.057331 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 15:23:35.057341 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 15:23:35.057352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:23:35.057363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:23:35.057374 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 15:23:35.057417 systemd-journald[1126]: Collecting audit messages is disabled.
Jan 30 15:23:35.057458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 15:23:35.057472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:23:35.057508 systemd-journald[1126]: Journal started
Jan 30 15:23:35.057535 systemd-journald[1126]: Runtime Journal (/run/log/journal/b1843e1d14aa4ab5963b5e0ff14b8d31) is 8.0M, max 76.5M, 68.5M free.
Jan 30 15:23:34.791594 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 15:23:34.813609 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 15:23:34.814168 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 15:23:35.059723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:23:35.062052 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 15:23:35.064595 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 15:23:35.064782 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 15:23:35.066098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:23:35.066740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:23:35.067795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 15:23:35.068719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:23:35.069596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 15:23:35.070918 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 15:23:35.084277 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 15:23:35.091893 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 15:23:35.096803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 15:23:35.099774 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 15:23:35.099879 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 15:23:35.101622 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 15:23:35.112857 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 15:23:35.116284 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 15:23:35.118168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:23:35.127135 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 15:23:35.134785 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 15:23:35.135470 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:23:35.140933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 15:23:35.141805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:23:35.145891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 15:23:35.151877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 15:23:35.161962 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 15:23:35.168718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:23:35.171201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 15:23:35.177971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 15:23:35.180085 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 15:23:35.183447 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 15:23:35.191641 systemd-journald[1126]: Time spent on flushing to /var/log/journal/b1843e1d14aa4ab5963b5e0ff14b8d31 is 37.699ms for 1132 entries.
Jan 30 15:23:35.191641 systemd-journald[1126]: System Journal (/var/log/journal/b1843e1d14aa4ab5963b5e0ff14b8d31) is 8.0M, max 584.8M, 576.8M free.
Jan 30 15:23:35.243000 systemd-journald[1126]: Received client request to flush runtime journal.
Jan 30 15:23:35.243054 kernel: loop0: detected capacity change from 0 to 8
Jan 30 15:23:35.245771 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 15:23:35.193320 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 15:23:35.202863 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 15:23:35.219860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 15:23:35.226466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:23:35.253736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 15:23:35.266214 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 15:23:35.271917 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 15:23:35.276274 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 15:23:35.277299 kernel: loop1: detected capacity change from 0 to 114432
Jan 30 15:23:35.293838 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 15:23:35.305799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 15:23:35.317763 kernel: loop2: detected capacity change from 0 to 189592
Jan 30 15:23:35.344999 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 30 15:23:35.345016 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 30 15:23:35.354786 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:23:35.364696 kernel: loop3: detected capacity change from 0 to 114328
Jan 30 15:23:35.408045 kernel: loop4: detected capacity change from 0 to 8
Jan 30 15:23:35.412782 kernel: loop5: detected capacity change from 0 to 114432
Jan 30 15:23:35.431437 kernel: loop6: detected capacity change from 0 to 189592
Jan 30 15:23:35.457722 kernel: loop7: detected capacity change from 0 to 114328
Jan 30 15:23:35.480408 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 30 15:23:35.481131 (sd-merge)[1192]: Merged extensions into '/usr'.
Jan 30 15:23:35.489352 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 15:23:35.489374 systemd[1]: Reloading...
Jan 30 15:23:35.627277 zram_generator::config[1215]: No configuration found.
Jan 30 15:23:35.657457 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 15:23:35.760586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:23:35.809225 systemd[1]: Reloading finished in 317 ms.
Jan 30 15:23:35.833447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 15:23:35.834980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 15:23:35.846994 systemd[1]: Starting ensure-sysext.service...
Jan 30 15:23:35.849409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 15:23:35.872740 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Jan 30 15:23:35.872772 systemd[1]: Reloading...
Jan 30 15:23:35.894621 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 15:23:35.895838 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 15:23:35.896871 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 15:23:35.897291 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 30 15:23:35.897412 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 30 15:23:35.901269 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:23:35.901469 systemd-tmpfiles[1256]: Skipping /boot
Jan 30 15:23:35.912411 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:23:35.912430 systemd-tmpfiles[1256]: Skipping /boot
Jan 30 15:23:35.962784 zram_generator::config[1291]: No configuration found.
Jan 30 15:23:36.050868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:23:36.100401 systemd[1]: Reloading finished in 227 ms.
Jan 30 15:23:36.120820 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 15:23:36.122025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:23:36.139980 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 15:23:36.146910 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 15:23:36.150191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 15:23:36.154880 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 15:23:36.160940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:23:36.162896 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 15:23:36.171461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:23:36.178045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:23:36.183007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:23:36.188047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:23:36.189093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:23:36.191063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:23:36.191227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:23:36.195977 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 15:23:36.202486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:23:36.205901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:23:36.207441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:23:36.209274 systemd[1]: Finished ensure-sysext.service.
Jan 30 15:23:36.213429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 15:23:36.219885 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 15:23:36.235920 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 15:23:36.246892 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 15:23:36.248819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:23:36.250017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:23:36.252320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:23:36.252471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:23:36.253536 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 15:23:36.254737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 15:23:36.262116 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:23:36.262795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:23:36.264899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:23:36.265002 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:23:36.279436 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 15:23:36.280529 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 15:23:36.285341 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Jan 30 15:23:36.289033 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 15:23:36.300204 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 15:23:36.310984 augenrules[1362]: No rules
Jan 30 15:23:36.313652 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 15:23:36.320446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:23:36.330524 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 15:23:36.391384 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 15:23:36.442531 systemd-networkd[1374]: lo: Link UP
Jan 30 15:23:36.442883 systemd-networkd[1374]: lo: Gained carrier
Jan 30 15:23:36.483074 systemd-resolved[1326]: Positive Trust Anchors:
Jan 30 15:23:36.483099 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 15:23:36.483134 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 15:23:36.486399 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 15:23:36.487771 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 15:23:36.495436 systemd-resolved[1326]: Using system hostname 'ci-4081-3-0-1-b815e480da'.
Jan 30 15:23:36.499306 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 15:23:36.500676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:23:36.503794 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 15:23:36.509559 systemd-networkd[1374]: Enumeration completed
Jan 30 15:23:36.509825 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 15:23:36.511026 systemd[1]: Reached target network.target - Network.
Jan 30 15:23:36.522009 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 15:23:36.553595 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 30 15:23:36.553747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:23:36.559070 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:23:36.565052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:23:36.570469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:23:36.571896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:23:36.571983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 15:23:36.577469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:23:36.577739 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:23:36.577742 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 15:23:36.578030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:23:36.579766 systemd-networkd[1374]: eth0: Link UP
Jan 30 15:23:36.579866 systemd-networkd[1374]: eth0: Gained carrier
Jan 30 15:23:36.579927 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:23:36.596369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:23:36.596791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:23:36.599122 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:23:36.600154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:23:36.606210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:23:36.606275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:23:36.629432 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:23:36.629573 systemd-networkd[1374]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 15:23:36.631211 systemd-networkd[1374]: eth1: Link UP
Jan 30 15:23:36.631347 systemd-networkd[1374]: eth1: Gained carrier
Jan 30 15:23:36.631422 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:23:36.659783 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1381)
Jan 30 15:23:36.666101 systemd-networkd[1374]: eth0: DHCPv4 address 49.13.124.2/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 15:23:36.668308 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jan 30 15:23:36.676035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:23:36.687694 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 30 15:23:36.687805 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 15:23:36.687824 kernel: [drm] features: -context_init
Jan 30 15:23:36.689761 kernel: [drm] number of scanouts: 1
Jan 30 15:23:36.689845 kernel: [drm] number of cap sets: 0
Jan 30 15:23:36.689918 systemd-networkd[1374]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 15:23:36.690727 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 30 15:23:36.691304 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Jan 30 15:23:36.698703 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 15:23:36.710396 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 15:23:36.715397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 15:23:36.716554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:23:36.716825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:23:36.723909 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 15:23:36.727639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:23:36.745800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 15:23:36.790002 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:23:36.832383 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 15:23:36.840079 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 15:23:36.863868 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 15:23:36.893447 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 15:23:36.896649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:23:36.897619 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 15:23:36.898351 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 15:23:36.899264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 15:23:36.900798 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 15:23:36.901599 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 15:23:36.902351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 15:23:36.903078 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 15:23:36.903110 systemd[1]: Reached target paths.target - Path Units.
Jan 30 15:23:36.903607 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 15:23:36.905579 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 15:23:36.907750 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 15:23:36.914170 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 15:23:36.916519 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 15:23:36.917894 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 15:23:36.918613 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 15:23:36.919282 systemd[1]: Reached target basic.target - Basic System.
Jan 30 15:23:36.919935 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 15:23:36.919983 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 15:23:36.922831 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 15:23:36.928024 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 15:23:36.931018 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 15:23:36.932105 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 15:23:36.935746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 15:23:36.940044 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 15:23:36.941516 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 15:23:36.946918 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 15:23:36.956816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 15:23:36.960985 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 30 15:23:36.965196 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 15:23:36.972057 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 15:23:36.984006 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 15:23:36.985563 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 15:23:36.987019 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 15:23:36.989824 jq[1444]: false
Jan 30 15:23:36.990652 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 15:23:36.995927 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 15:23:36.999058 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 15:23:37.001026 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 15:23:37.004626 coreos-metadata[1442]: Jan 30 15:23:37.004 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 30 15:23:37.016003 coreos-metadata[1442]: Jan 30 15:23:37.013 INFO Fetch successful
Jan 30 15:23:37.016003 coreos-metadata[1442]: Jan 30 15:23:37.013 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 30 15:23:37.016003 coreos-metadata[1442]: Jan 30 15:23:37.014 INFO Fetch successful
Jan 30 15:23:37.021099 dbus-daemon[1443]: [system] SELinux support is enabled
Jan 30 15:23:37.022171 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 15:23:37.031244 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 15:23:37.031279 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 15:23:37.033715 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:23:37.033735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:23:37.057141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:23:37.057342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:23:37.063670 jq[1456]: true Jan 30 15:23:37.067723 update_engine[1455]: I20250130 15:23:37.066078 1455 main.cc:92] Flatcar Update Engine starting Jan 30 15:23:37.071903 update_engine[1455]: I20250130 15:23:37.071852 1455 update_check_scheduler.cc:74] Next update check in 8m0s Jan 30 15:23:37.076198 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:23:37.076540 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:23:37.078965 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 15:23:37.095756 extend-filesystems[1445]: Found loop4 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found loop5 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found loop6 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found loop7 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda1 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda2 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda3 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found usr Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda4 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda6 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda7 Jan 30 15:23:37.095756 extend-filesystems[1445]: Found sda9 Jan 30 15:23:37.095756 extend-filesystems[1445]: Checking size of /dev/sda9 Jan 30 15:23:37.152523 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 15:23:37.104218 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:23:37.160983 extend-filesystems[1445]: Resized partition /dev/sda9 Jan 30 15:23:37.162012 tar[1462]: linux-arm64/helm Jan 30 15:23:37.106717 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:23:37.168675 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:23:37.123148 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:23:37.182686 jq[1478]: true Jan 30 15:23:37.195714 systemd-logind[1453]: New seat seat0. Jan 30 15:23:37.196866 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 15:23:37.196881 systemd-logind[1453]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 15:23:37.197505 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 15:23:37.241199 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:23:37.244122 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 15:23:37.322479 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:23:37.324015 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:23:37.344237 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1382) Jan 30 15:23:37.347329 systemd[1]: Starting sshkeys.service... Jan 30 15:23:37.359719 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 15:23:37.380650 containerd[1475]: time="2025-01-30T15:23:37.380538520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:23:37.384396 extend-filesystems[1490]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 15:23:37.384396 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 15:23:37.384396 extend-filesystems[1490]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 15:23:37.392514 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Jan 30 15:23:37.392514 extend-filesystems[1445]: Found sr0 Jan 30 15:23:37.387202 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:23:37.387359 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:23:37.391651 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:23:37.397137 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:23:37.403065 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 15:23:37.438152 coreos-metadata[1530]: Jan 30 15:23:37.437 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 15:23:37.441807 coreos-metadata[1530]: Jan 30 15:23:37.441 INFO Fetch successful Jan 30 15:23:37.447824 unknown[1530]: wrote ssh authorized keys file for user: core Jan 30 15:23:37.460639 containerd[1475]: time="2025-01-30T15:23:37.459871800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.466873 containerd[1475]: time="2025-01-30T15:23:37.466820960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:23:37.467984 containerd[1475]: time="2025-01-30T15:23:37.467934320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:23:37.468043 containerd[1475]: time="2025-01-30T15:23:37.467997640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:23:37.468754 containerd[1475]: time="2025-01-30T15:23:37.468721240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:23:37.468819 containerd[1475]: time="2025-01-30T15:23:37.468760440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.468857 containerd[1475]: time="2025-01-30T15:23:37.468834440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:23:37.468857 containerd[1475]: time="2025-01-30T15:23:37.468854800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469112 containerd[1475]: time="2025-01-30T15:23:37.469084080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469112 containerd[1475]: time="2025-01-30T15:23:37.469108320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469171 containerd[1475]: time="2025-01-30T15:23:37.469122440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469171 containerd[1475]: time="2025-01-30T15:23:37.469132600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469231 containerd[1475]: time="2025-01-30T15:23:37.469210160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469431 containerd[1475]: time="2025-01-30T15:23:37.469407480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469532 containerd[1475]: time="2025-01-30T15:23:37.469509920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:23:37.469532 containerd[1475]: time="2025-01-30T15:23:37.469529160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 15:23:37.469753 containerd[1475]: time="2025-01-30T15:23:37.469606960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:23:37.472853 containerd[1475]: time="2025-01-30T15:23:37.469654920Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:23:37.478899 containerd[1475]: time="2025-01-30T15:23:37.478843440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:23:37.478899 containerd[1475]: time="2025-01-30T15:23:37.478919760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:23:37.479136 containerd[1475]: time="2025-01-30T15:23:37.478994440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:23:37.479136 containerd[1475]: time="2025-01-30T15:23:37.479016800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:23:37.479136 containerd[1475]: time="2025-01-30T15:23:37.479045640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:23:37.479438 containerd[1475]: time="2025-01-30T15:23:37.479233200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:23:37.479470 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:23:37.481316 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.479886680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480034400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480053880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480068360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480081640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480094600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480106960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480121240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480135880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480149040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480161640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480176960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480197280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482280 containerd[1475]: time="2025-01-30T15:23:37.480212120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480225480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480240400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480258680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480272880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480284480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480297840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480311040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480327600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480339840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480353040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480365920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480383160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480403720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480423560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.482521 containerd[1475]: time="2025-01-30T15:23:37.480434480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483519360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483559000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483570320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483585240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483595360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483614840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483625840Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:23:37.486650 containerd[1475]: time="2025-01-30T15:23:37.483636080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 15:23:37.486873 containerd[1475]: time="2025-01-30T15:23:37.485426240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:23:37.486873 containerd[1475]: time="2025-01-30T15:23:37.485500720Z" level=info msg="Connect containerd service" Jan 30 15:23:37.486873 containerd[1475]: time="2025-01-30T15:23:37.485543160Z" level=info msg="using legacy CRI server" Jan 30 15:23:37.486873 containerd[1475]: time="2025-01-30T15:23:37.485551320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:23:37.486873 containerd[1475]: time="2025-01-30T15:23:37.485647560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:23:37.487914 systemd[1]: Finished 
sshkeys.service. Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.488366000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489319400Z" level=info msg="Start subscribing containerd event" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489381360Z" level=info msg="Start recovering state" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489452920Z" level=info msg="Start event monitor" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489463360Z" level=info msg="Start snapshots syncer" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489473640Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.489482240Z" level=info msg="Start streaming server" Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.490196640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.490257400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:23:37.490410 containerd[1475]: time="2025-01-30T15:23:37.490313720Z" level=info msg="containerd successfully booted in 0.120419s" Jan 30 15:23:37.490900 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:23:37.667900 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 30 15:23:37.668445 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 15:23:37.673616 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 30 15:23:37.676377 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:23:37.686152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:23:37.694989 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:23:37.752595 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:23:37.830829 tar[1462]: linux-arm64/LICENSE Jan 30 15:23:37.831068 tar[1462]: linux-arm64/README.md Jan 30 15:23:37.842536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:23:38.102978 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:23:38.116886 systemd-networkd[1374]: eth1: Gained IPv6LL Jan 30 15:23:38.117328 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 15:23:38.125906 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:23:38.135058 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:23:38.147917 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:23:38.148155 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:23:38.155321 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:23:38.165703 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:23:38.173622 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:23:38.176152 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 15:23:38.178324 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:23:38.438713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:23:38.440048 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:23:38.441510 systemd[1]: Startup finished in 775ms (kernel) + 6.614s (initrd) + 4.167s (userspace) = 11.557s. 
Jan 30 15:23:38.445284 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:23:38.958468 kubelet[1574]: E0130 15:23:38.958397 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:23:38.962973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:23:38.963244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:23:49.184769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:23:49.198162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:23:49.321167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:23:49.332429 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:23:49.387237 kubelet[1594]: E0130 15:23:49.387158 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:23:49.392413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:23:49.392583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:23:59.434264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:23:59.445154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 15:23:59.546615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:23:59.562234 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:23:59.607517 kubelet[1610]: E0130 15:23:59.607423 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:23:59.609589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:23:59.609810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:24:07.896875 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 15:24:07.897146 systemd-timesyncd[1342]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). Jan 30 15:24:07.897223 systemd-timesyncd[1342]: Initial clock synchronization to Thu 2025-01-30 15:24:07.896823 UTC. Jan 30 15:24:09.254552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 15:24:09.269013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:24:09.391231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:24:09.396447 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:24:09.444065 kubelet[1625]: E0130 15:24:09.443979 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:24:09.447065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:24:09.447439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:24:19.504376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 15:24:19.516990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:24:19.633339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:24:19.648795 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:24:19.697392 kubelet[1640]: E0130 15:24:19.697327 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:24:19.699871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:24:19.700017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:24:21.499096 update_engine[1455]: I20250130 15:24:21.498868 1455 update_attempter.cc:509] Updating boot flags... 
Jan 30 15:24:21.552635 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1655) Jan 30 15:24:21.602702 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1657) Jan 30 15:24:29.754806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 15:24:29.759944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:24:29.874906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:24:29.891230 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:24:29.942398 kubelet[1672]: E0130 15:24:29.942290 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:24:29.944970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:24:29.945133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:24:40.004770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 15:24:40.012918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:24:40.130827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:24:40.139336 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:24:40.180842 kubelet[1687]: E0130 15:24:40.180778 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:24:40.183784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:24:40.183985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:24:50.254686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 15:24:50.263938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:24:50.384877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:24:50.385017 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:24:50.432208 kubelet[1702]: E0130 15:24:50.432129 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:24:50.435229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:24:50.435389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:00.504037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 15:25:00.516893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 15:25:00.637197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:25:00.643246 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:00.682755 kubelet[1717]: E0130 15:25:00.682705 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:00.685193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:00.685370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:10.754236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 15:25:10.760054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:25:10.883323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:25:10.897309 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:10.954594 kubelet[1732]: E0130 15:25:10.954517 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:10.956910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:10.957084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:21.004533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 30 15:25:21.020312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:25:21.158977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:25:21.161298 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:21.205647 kubelet[1747]: E0130 15:25:21.205556 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:21.209150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:21.209388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:28.596291 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:25:28.603170 systemd[1]: Started sshd@0-49.13.124.2:22-139.178.68.195:38452.service - OpenSSH per-connection server daemon (139.178.68.195:38452). Jan 30 15:25:29.590874 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 38452 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:29.594117 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:29.604947 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:25:29.614148 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:25:29.618573 systemd-logind[1453]: New session 1 of user core. Jan 30 15:25:29.630318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:25:29.637117 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 15:25:29.650016 (systemd)[1759]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:25:29.766058 systemd[1759]: Queued start job for default target default.target. Jan 30 15:25:29.777274 systemd[1759]: Created slice app.slice - User Application Slice. Jan 30 15:25:29.777322 systemd[1759]: Reached target paths.target - Paths. Jan 30 15:25:29.777343 systemd[1759]: Reached target timers.target - Timers. Jan 30 15:25:29.779289 systemd[1759]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:25:29.794117 systemd[1759]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:25:29.794259 systemd[1759]: Reached target sockets.target - Sockets. Jan 30 15:25:29.794275 systemd[1759]: Reached target basic.target - Basic System. Jan 30 15:25:29.794335 systemd[1759]: Reached target default.target - Main User Target. Jan 30 15:25:29.794370 systemd[1759]: Startup finished in 137ms. Jan 30 15:25:29.794633 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:25:29.802965 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:25:30.502062 systemd[1]: Started sshd@1-49.13.124.2:22-139.178.68.195:38460.service - OpenSSH per-connection server daemon (139.178.68.195:38460). Jan 30 15:25:31.254650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 15:25:31.263001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:25:31.387979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:25:31.389627 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:31.424111 kubelet[1780]: E0130 15:25:31.424041 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:31.427181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:31.427828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:31.480419 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 38460 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:31.482772 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:31.488953 systemd-logind[1453]: New session 2 of user core. Jan 30 15:25:31.498955 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:25:32.162845 sshd[1770]: pam_unix(sshd:session): session closed for user core Jan 30 15:25:32.167781 systemd[1]: sshd@1-49.13.124.2:22-139.178.68.195:38460.service: Deactivated successfully. Jan 30 15:25:32.170226 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:25:32.171916 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:25:32.173300 systemd-logind[1453]: Removed session 2. Jan 30 15:25:32.330615 systemd[1]: Started sshd@2-49.13.124.2:22-139.178.68.195:38472.service - OpenSSH per-connection server daemon (139.178.68.195:38472). 
Jan 30 15:25:33.303308 sshd[1792]: Accepted publickey for core from 139.178.68.195 port 38472 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:33.305721 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:33.310531 systemd-logind[1453]: New session 3 of user core. Jan 30 15:25:33.317790 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:25:33.973897 sshd[1792]: pam_unix(sshd:session): session closed for user core Jan 30 15:25:33.978505 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Jan 30 15:25:33.979360 systemd[1]: sshd@2-49.13.124.2:22-139.178.68.195:38472.service: Deactivated successfully. Jan 30 15:25:33.981527 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 15:25:33.984551 systemd-logind[1453]: Removed session 3. Jan 30 15:25:34.145692 systemd[1]: Started sshd@3-49.13.124.2:22-139.178.68.195:38476.service - OpenSSH per-connection server daemon (139.178.68.195:38476). Jan 30 15:25:35.134006 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 38476 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:35.136545 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:35.144043 systemd-logind[1453]: New session 4 of user core. Jan 30 15:25:35.149978 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:25:35.821652 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 30 15:25:35.828848 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Jan 30 15:25:35.828911 systemd[1]: sshd@3-49.13.124.2:22-139.178.68.195:38476.service: Deactivated successfully. Jan 30 15:25:35.831682 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 15:25:35.834063 systemd-logind[1453]: Removed session 4. 
Jan 30 15:25:36.005133 systemd[1]: Started sshd@4-49.13.124.2:22-139.178.68.195:59300.service - OpenSSH per-connection server daemon (139.178.68.195:59300). Jan 30 15:25:36.978860 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 59300 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:36.980832 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:36.985251 systemd-logind[1453]: New session 5 of user core. Jan 30 15:25:36.999945 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:25:37.508304 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:25:37.508834 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:25:37.523910 sudo[1809]: pam_unix(sudo:session): session closed for user root Jan 30 15:25:37.683526 sshd[1806]: pam_unix(sshd:session): session closed for user core Jan 30 15:25:37.689136 systemd[1]: sshd@4-49.13.124.2:22-139.178.68.195:59300.service: Deactivated successfully. Jan 30 15:25:37.691257 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:25:37.693173 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Jan 30 15:25:37.694381 systemd-logind[1453]: Removed session 5. Jan 30 15:25:37.862099 systemd[1]: Started sshd@5-49.13.124.2:22-139.178.68.195:59302.service - OpenSSH per-connection server daemon (139.178.68.195:59302). Jan 30 15:25:38.839518 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 59302 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:38.842208 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:38.848368 systemd-logind[1453]: New session 6 of user core. Jan 30 15:25:38.854941 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 15:25:39.363948 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:25:39.364216 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:25:39.368687 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 30 15:25:39.374345 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:25:39.375025 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:25:39.399069 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 15:25:39.401533 auditctl[1821]: No rules Jan 30 15:25:39.402186 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:25:39.402607 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:25:39.406372 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:25:39.449033 augenrules[1839]: No rules Jan 30 15:25:39.450867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:25:39.453022 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 30 15:25:39.613023 sshd[1814]: pam_unix(sshd:session): session closed for user core Jan 30 15:25:39.619276 systemd[1]: sshd@5-49.13.124.2:22-139.178.68.195:59302.service: Deactivated successfully. Jan 30 15:25:39.621391 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:25:39.622291 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:25:39.623203 systemd-logind[1453]: Removed session 6. Jan 30 15:25:39.781411 systemd[1]: Started sshd@6-49.13.124.2:22-139.178.68.195:59316.service - OpenSSH per-connection server daemon (139.178.68.195:59316). 
Jan 30 15:25:40.763924 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 59316 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0 Jan 30 15:25:40.765826 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:25:40.770091 systemd-logind[1453]: New session 7 of user core. Jan 30 15:25:40.780928 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:25:41.285988 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:25:41.286299 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:25:41.503919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 15:25:41.514325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:25:41.605775 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:25:41.607963 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:25:41.672568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:25:41.674038 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:41.746322 kubelet[1875]: E0130 15:25:41.746176 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:41.749481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:41.749848 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:25:41.869698 dockerd[1869]: time="2025-01-30T15:25:41.869423079Z" level=info msg="Starting up" Jan 30 15:25:41.963580 dockerd[1869]: time="2025-01-30T15:25:41.963505586Z" level=info msg="Loading containers: start." Jan 30 15:25:42.064640 kernel: Initializing XFRM netlink socket Jan 30 15:25:42.139906 systemd-networkd[1374]: docker0: Link UP Jan 30 15:25:42.164145 dockerd[1869]: time="2025-01-30T15:25:42.164048641Z" level=info msg="Loading containers: done." Jan 30 15:25:42.179234 dockerd[1869]: time="2025-01-30T15:25:42.179176583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:25:42.179414 dockerd[1869]: time="2025-01-30T15:25:42.179294343Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:25:42.179463 dockerd[1869]: time="2025-01-30T15:25:42.179410544Z" level=info msg="Daemon has completed initialization" Jan 30 15:25:42.220764 dockerd[1869]: time="2025-01-30T15:25:42.220336363Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:25:42.221267 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 15:25:43.264725 containerd[1475]: time="2025-01-30T15:25:43.264669501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 15:25:43.985873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996536208.mount: Deactivated successfully. 
Jan 30 15:25:45.791229 containerd[1475]: time="2025-01-30T15:25:45.791175526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:45.793378 containerd[1475]: time="2025-01-30T15:25:45.793304786Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618162" Jan 30 15:25:45.795052 containerd[1475]: time="2025-01-30T15:25:45.794964731Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:45.802632 containerd[1475]: time="2025-01-30T15:25:45.800878236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:45.803614 containerd[1475]: time="2025-01-30T15:25:45.803544931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.53882171s" Jan 30 15:25:45.803770 containerd[1475]: time="2025-01-30T15:25:45.803746249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 30 15:25:45.805260 containerd[1475]: time="2025-01-30T15:25:45.805214996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 15:25:48.193844 containerd[1475]: time="2025-01-30T15:25:48.193791552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:48.196150 containerd[1475]: time="2025-01-30T15:25:48.196087132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469487" Jan 30 15:25:48.197511 containerd[1475]: time="2025-01-30T15:25:48.197459361Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:48.200638 containerd[1475]: time="2025-01-30T15:25:48.200565214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:48.202271 containerd[1475]: time="2025-01-30T15:25:48.202106241Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.396849486s" Jan 30 15:25:48.202271 containerd[1475]: time="2025-01-30T15:25:48.202149321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 30 15:25:48.203142 containerd[1475]: time="2025-01-30T15:25:48.202888594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 15:25:50.001634 containerd[1475]: time="2025-01-30T15:25:50.001196264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:50.003030 containerd[1475]: time="2025-01-30T15:25:50.002988529Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024237" Jan 30 15:25:50.003666 containerd[1475]: time="2025-01-30T15:25:50.003453445Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:50.007277 containerd[1475]: time="2025-01-30T15:25:50.007217095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:50.009198 containerd[1475]: time="2025-01-30T15:25:50.008344486Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.805405452s" Jan 30 15:25:50.009198 containerd[1475]: time="2025-01-30T15:25:50.008429365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 30 15:25:50.009547 containerd[1475]: time="2025-01-30T15:25:50.009524996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 15:25:51.511671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809331433.mount: Deactivated successfully. Jan 30 15:25:51.754581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 15:25:51.763026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 15:25:51.841515 containerd[1475]: time="2025-01-30T15:25:51.841446125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:51.844621 containerd[1475]: time="2025-01-30T15:25:51.843672787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772143" Jan 30 15:25:51.850119 containerd[1475]: time="2025-01-30T15:25:51.848039673Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:51.855220 containerd[1475]: time="2025-01-30T15:25:51.855125937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:51.858791 containerd[1475]: time="2025-01-30T15:25:51.856724885Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.84698073s" Jan 30 15:25:51.860435 containerd[1475]: time="2025-01-30T15:25:51.860398216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 30 15:25:51.862383 containerd[1475]: time="2025-01-30T15:25:51.862316481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 15:25:51.894980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:25:51.895255 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:25:51.941199 kubelet[2097]: E0130 15:25:51.941084 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:25:51.944105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:25:51.944292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:25:52.496441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963446520.mount: Deactivated successfully. Jan 30 15:25:53.442347 containerd[1475]: time="2025-01-30T15:25:53.442275606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:53.443686 containerd[1475]: time="2025-01-30T15:25:53.443606397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 15:25:53.444565 containerd[1475]: time="2025-01-30T15:25:53.444455190Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:53.447622 containerd[1475]: time="2025-01-30T15:25:53.447552247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:53.448975 containerd[1475]: time="2025-01-30T15:25:53.448935437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.585831842s" Jan 30 15:25:53.449238 containerd[1475]: time="2025-01-30T15:25:53.449086796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 15:25:53.450097 containerd[1475]: time="2025-01-30T15:25:53.449762031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 15:25:53.997197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449032883.mount: Deactivated successfully. Jan 30 15:25:54.006050 containerd[1475]: time="2025-01-30T15:25:54.005940424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:54.007343 containerd[1475]: time="2025-01-30T15:25:54.007293494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 30 15:25:54.008520 containerd[1475]: time="2025-01-30T15:25:54.008138528Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:54.011845 containerd[1475]: time="2025-01-30T15:25:54.011809461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:54.012723 containerd[1475]: time="2025-01-30T15:25:54.012686855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 562.842825ms" Jan 30 15:25:54.012841 containerd[1475]: time="2025-01-30T15:25:54.012825214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 15:25:54.013904 containerd[1475]: time="2025-01-30T15:25:54.013571289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 15:25:54.619109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425662400.mount: Deactivated successfully. Jan 30 15:25:56.266633 containerd[1475]: time="2025-01-30T15:25:56.265343800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:56.266633 containerd[1475]: time="2025-01-30T15:25:56.266578751Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Jan 30 15:25:56.267179 containerd[1475]: time="2025-01-30T15:25:56.267149267Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:56.270469 containerd[1475]: time="2025-01-30T15:25:56.270432885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:25:56.272130 containerd[1475]: time="2025-01-30T15:25:56.272084033Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.258471466s" Jan 30 
15:25:56.272130 containerd[1475]: time="2025-01-30T15:25:56.272127113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 15:26:02.004131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 30 15:26:02.011940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:02.125809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:02.129466 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:26:02.170652 kubelet[2232]: E0130 15:26:02.170575 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:26:02.173756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:26:02.173895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:26:02.907535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:02.916099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:02.958299 systemd[1]: Reloading requested from client PID 2247 ('systemctl') (unit session-7.scope)... Jan 30 15:26:02.958323 systemd[1]: Reloading... Jan 30 15:26:03.085617 zram_generator::config[2287]: No configuration found. Jan 30 15:26:03.199547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 15:26:03.275932 systemd[1]: Reloading finished in 317 ms. Jan 30 15:26:03.335702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:03.341374 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:03.343744 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:26:03.344763 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:03.350024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:03.477523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:03.490248 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:26:03.537154 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:26:03.538078 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:26:03.538078 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:26:03.540671 kubelet[2337]: I0130 15:26:03.540574 2337 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:26:04.272473 kubelet[2337]: I0130 15:26:04.272401 2337 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 15:26:04.272473 kubelet[2337]: I0130 15:26:04.272449 2337 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:26:04.272964 kubelet[2337]: I0130 15:26:04.272925 2337 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 15:26:04.299927 kubelet[2337]: I0130 15:26:04.299886 2337 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:26:04.302386 kubelet[2337]: E0130 15:26:04.302332 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.124.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:04.311106 kubelet[2337]: E0130 15:26:04.311049 2337 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:26:04.311106 kubelet[2337]: I0130 15:26:04.311104 2337 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 15:26:04.317019 kubelet[2337]: I0130 15:26:04.316980 2337 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 15:26:04.319164 kubelet[2337]: I0130 15:26:04.318643 2337 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 15:26:04.319164 kubelet[2337]: I0130 15:26:04.318988 2337 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:26:04.319413 kubelet[2337]: I0130 15:26:04.319021 2337 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-1-b815e480da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:26:04.319413 kubelet[2337]: I0130 15:26:04.319279 2337 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:26:04.319413 kubelet[2337]: I0130 15:26:04.319291 2337 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 15:26:04.319580 kubelet[2337]: I0130 15:26:04.319484 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:26:04.321530 kubelet[2337]: I0130 15:26:04.321501 2337 kubelet.go:408] "Attempting to sync node with API server" Jan 30 15:26:04.321530 kubelet[2337]: I0130 15:26:04.321533 2337 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:26:04.323768 kubelet[2337]: I0130 15:26:04.321560 2337 kubelet.go:314] "Adding apiserver pod source" Jan 30 15:26:04.323768 kubelet[2337]: I0130 15:26:04.321574 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:26:04.329427 kubelet[2337]: W0130 15:26:04.329354 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:04.329573 kubelet[2337]: E0130 15:26:04.329433 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.124.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:04.329930 kubelet[2337]: W0130 15:26:04.329881 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-b815e480da&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection 
refused Jan 30 15:26:04.329980 kubelet[2337]: E0130 15:26:04.329931 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-b815e480da&limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:04.330405 kubelet[2337]: I0130 15:26:04.330372 2337 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:26:04.332563 kubelet[2337]: I0130 15:26:04.332529 2337 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:26:04.333866 kubelet[2337]: W0130 15:26:04.333822 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 15:26:04.335084 kubelet[2337]: I0130 15:26:04.335057 2337 server.go:1269] "Started kubelet" Jan 30 15:26:04.336610 kubelet[2337]: I0130 15:26:04.335933 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:26:04.337526 kubelet[2337]: I0130 15:26:04.337497 2337 server.go:460] "Adding debug handlers to kubelet server" Jan 30 15:26:04.339428 kubelet[2337]: I0130 15:26:04.339365 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:26:04.339719 kubelet[2337]: I0130 15:26:04.339701 2337 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:26:04.341564 kubelet[2337]: I0130 15:26:04.341395 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:26:04.341937 kubelet[2337]: E0130 15:26:04.340653 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.124.2:6443/api/v1/namespaces/default/events\": dial tcp 
49.13.124.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-1-b815e480da.181f81dc4abd9e2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-1-b815e480da,UID:ci-4081-3-0-1-b815e480da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-1-b815e480da,},FirstTimestamp:2025-01-30 15:26:04.335029806 +0000 UTC m=+0.838447725,LastTimestamp:2025-01-30 15:26:04.335029806 +0000 UTC m=+0.838447725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-1-b815e480da,}" Jan 30 15:26:04.342856 kubelet[2337]: I0130 15:26:04.342607 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:26:04.348055 kubelet[2337]: E0130 15:26:04.348016 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-1-b815e480da\" not found" Jan 30 15:26:04.348180 kubelet[2337]: I0130 15:26:04.348125 2337 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 15:26:04.348385 kubelet[2337]: I0130 15:26:04.348358 2337 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 15:26:04.348440 kubelet[2337]: I0130 15:26:04.348423 2337 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:26:04.349629 kubelet[2337]: W0130 15:26:04.349559 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:04.349723 kubelet[2337]: E0130 15:26:04.349635 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:04.350470 kubelet[2337]: E0130 15:26:04.350430 2337 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:26:04.353343 kubelet[2337]: I0130 15:26:04.353084 2337 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:26:04.353343 kubelet[2337]: I0130 15:26:04.353106 2337 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:26:04.353343 kubelet[2337]: I0130 15:26:04.353220 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:26:04.359641 kubelet[2337]: E0130 15:26:04.359541 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-b815e480da?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="200ms" Jan 30 15:26:04.367563 kubelet[2337]: I0130 15:26:04.367413 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:26:04.369027 kubelet[2337]: I0130 15:26:04.368997 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:26:04.369483 kubelet[2337]: I0130 15:26:04.369133 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:26:04.369483 kubelet[2337]: I0130 15:26:04.369159 2337 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 15:26:04.369483 kubelet[2337]: E0130 15:26:04.369218 2337 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:26:04.376138 kubelet[2337]: W0130 15:26:04.375923 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:04.376138 kubelet[2337]: E0130 15:26:04.375989 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:04.385299 kubelet[2337]: I0130 15:26:04.385257 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:26:04.385299 kubelet[2337]: I0130 15:26:04.385284 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:26:04.385299 kubelet[2337]: I0130 15:26:04.385304 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:26:04.388063 kubelet[2337]: I0130 15:26:04.388029 2337 policy_none.go:49] "None policy: Start" Jan 30 15:26:04.389431 kubelet[2337]: I0130 15:26:04.389394 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:26:04.389497 kubelet[2337]: I0130 15:26:04.389448 2337 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:26:04.396405 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jan 30 15:26:04.407283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 15:26:04.410963 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 15:26:04.424388 kubelet[2337]: I0130 15:26:04.423305 2337 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:26:04.424388 kubelet[2337]: I0130 15:26:04.423655 2337 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:26:04.424388 kubelet[2337]: I0130 15:26:04.423679 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:26:04.424388 kubelet[2337]: I0130 15:26:04.424085 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:26:04.427968 kubelet[2337]: E0130 15:26:04.427929 2337 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-1-b815e480da\" not found" Jan 30 15:26:04.482696 systemd[1]: Created slice kubepods-burstable-pod1821fb7a5b1b0971fbc906e88dd671be.slice - libcontainer container kubepods-burstable-pod1821fb7a5b1b0971fbc906e88dd671be.slice. Jan 30 15:26:04.510369 systemd[1]: Created slice kubepods-burstable-podfba09c0cd6b44eacdaf8b0740a14b7a6.slice - libcontainer container kubepods-burstable-podfba09c0cd6b44eacdaf8b0740a14b7a6.slice. Jan 30 15:26:04.517099 systemd[1]: Created slice kubepods-burstable-podaa6f87c8510ae686e4520db81330e383.slice - libcontainer container kubepods-burstable-podaa6f87c8510ae686e4520db81330e383.slice. 
Jan 30 15:26:04.527001 kubelet[2337]: I0130 15:26:04.526888 2337 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.529030 kubelet[2337]: E0130 15:26:04.528997 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.560446 kubelet[2337]: E0130 15:26:04.560385 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-b815e480da?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="400ms" Jan 30 15:26:04.649995 kubelet[2337]: I0130 15:26:04.649933 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.649995 kubelet[2337]: I0130 15:26:04.650004 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa6f87c8510ae686e4520db81330e383-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-1-b815e480da\" (UID: \"aa6f87c8510ae686e4520db81330e383\") " pod="kube-system/kube-scheduler-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650555 kubelet[2337]: I0130 15:26:04.650045 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " 
pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650555 kubelet[2337]: I0130 15:26:04.650088 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650555 kubelet[2337]: I0130 15:26:04.650130 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650555 kubelet[2337]: I0130 15:26:04.650169 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650555 kubelet[2337]: I0130 15:26:04.650239 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650840 kubelet[2337]: I0130 15:26:04.650280 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.650840 kubelet[2337]: I0130 15:26:04.650313 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.732315 kubelet[2337]: I0130 15:26:04.731891 2337 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.732547 kubelet[2337]: E0130 15:26:04.732515 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:04.806919 containerd[1475]: time="2025-01-30T15:26:04.806802513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-1-b815e480da,Uid:1821fb7a5b1b0971fbc906e88dd671be,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:04.816113 containerd[1475]: time="2025-01-30T15:26:04.815697944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-1-b815e480da,Uid:fba09c0cd6b44eacdaf8b0740a14b7a6,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:04.821286 containerd[1475]: time="2025-01-30T15:26:04.820909555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-1-b815e480da,Uid:aa6f87c8510ae686e4520db81330e383,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:04.961694 kubelet[2337]: E0130 15:26:04.961569 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-b815e480da?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="800ms" Jan 30 15:26:05.135929 kubelet[2337]: I0130 15:26:05.135779 2337 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:05.136480 kubelet[2337]: E0130 15:26:05.136173 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:05.179420 kubelet[2337]: W0130 15:26:05.179264 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-b815e480da&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:05.179420 kubelet[2337]: E0130 15:26:05.179372 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-b815e480da&limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:05.317174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421310648.mount: Deactivated successfully. 
Jan 30 15:26:05.324060 containerd[1475]: time="2025-01-30T15:26:05.324010694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:26:05.325866 containerd[1475]: time="2025-01-30T15:26:05.325827524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 15:26:05.328455 containerd[1475]: time="2025-01-30T15:26:05.328409590Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:26:05.331637 containerd[1475]: time="2025-01-30T15:26:05.330483659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:26:05.331637 containerd[1475]: time="2025-01-30T15:26:05.331439534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:26:05.332619 containerd[1475]: time="2025-01-30T15:26:05.332524328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:26:05.334488 containerd[1475]: time="2025-01-30T15:26:05.334435198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:26:05.335719 containerd[1475]: time="2025-01-30T15:26:05.335660111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:26:05.340137 
containerd[1475]: time="2025-01-30T15:26:05.340080127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.257824ms" Jan 30 15:26:05.342493 containerd[1475]: time="2025-01-30T15:26:05.342448674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.561162ms" Jan 30 15:26:05.342794 containerd[1475]: time="2025-01-30T15:26:05.342769153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.782358ms" Jan 30 15:26:05.469050 containerd[1475]: time="2025-01-30T15:26:05.468116196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:05.469050 containerd[1475]: time="2025-01-30T15:26:05.468210276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:05.469050 containerd[1475]: time="2025-01-30T15:26:05.468227716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.469050 containerd[1475]: time="2025-01-30T15:26:05.468365715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.476650 containerd[1475]: time="2025-01-30T15:26:05.476467151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:05.477748 containerd[1475]: time="2025-01-30T15:26:05.477502626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:05.477748 containerd[1475]: time="2025-01-30T15:26:05.477629945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.478487 containerd[1475]: time="2025-01-30T15:26:05.478016543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:05.478487 containerd[1475]: time="2025-01-30T15:26:05.478269621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:05.478487 containerd[1475]: time="2025-01-30T15:26:05.478298861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.478487 containerd[1475]: time="2025-01-30T15:26:05.478415261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.479117 containerd[1475]: time="2025-01-30T15:26:05.479073217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:05.501155 systemd[1]: Started cri-containerd-813f0745c0106f3df4d94961a51ee23b9ab22b099472e088296a031b3a9df817.scope - libcontainer container 813f0745c0106f3df4d94961a51ee23b9ab22b099472e088296a031b3a9df817. 
Jan 30 15:26:05.511942 systemd[1]: Started cri-containerd-1a95c6ea2302a62645d59989d09fd883dc1cf81000fce8c42e05c0575aad781a.scope - libcontainer container 1a95c6ea2302a62645d59989d09fd883dc1cf81000fce8c42e05c0575aad781a. Jan 30 15:26:05.519824 systemd[1]: Started cri-containerd-9ae7517a25dcb9ba10a8908d87b2da9d59e0122cfedac74777883169a4b5a820.scope - libcontainer container 9ae7517a25dcb9ba10a8908d87b2da9d59e0122cfedac74777883169a4b5a820. Jan 30 15:26:05.524728 kubelet[2337]: W0130 15:26:05.524345 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:05.524728 kubelet[2337]: E0130 15:26:05.524427 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:05.566392 containerd[1475]: time="2025-01-30T15:26:05.566165627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-1-b815e480da,Uid:aa6f87c8510ae686e4520db81330e383,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a95c6ea2302a62645d59989d09fd883dc1cf81000fce8c42e05c0575aad781a\"" Jan 30 15:26:05.568420 kubelet[2337]: W0130 15:26:05.567939 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 15:26:05.568763 kubelet[2337]: E0130 15:26:05.568444 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://49.13.124.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.124.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:26:05.576489 containerd[1475]: time="2025-01-30T15:26:05.576289532Z" level=info msg="CreateContainer within sandbox \"1a95c6ea2302a62645d59989d09fd883dc1cf81000fce8c42e05c0575aad781a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 15:26:05.588110 containerd[1475]: time="2025-01-30T15:26:05.588026269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-1-b815e480da,Uid:1821fb7a5b1b0971fbc906e88dd671be,Namespace:kube-system,Attempt:0,} returns sandbox id \"813f0745c0106f3df4d94961a51ee23b9ab22b099472e088296a031b3a9df817\"" Jan 30 15:26:05.591942 containerd[1475]: time="2025-01-30T15:26:05.591756009Z" level=info msg="CreateContainer within sandbox \"1a95c6ea2302a62645d59989d09fd883dc1cf81000fce8c42e05c0575aad781a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c13b77a9d44aec9517ce9e75e5d21b93c10b938fa72c6e9bd9c57b28faebfd4\"" Jan 30 15:26:05.591942 containerd[1475]: time="2025-01-30T15:26:05.591843968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-1-b815e480da,Uid:fba09c0cd6b44eacdaf8b0740a14b7a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ae7517a25dcb9ba10a8908d87b2da9d59e0122cfedac74777883169a4b5a820\"" Jan 30 15:26:05.592728 containerd[1475]: time="2025-01-30T15:26:05.592514605Z" level=info msg="StartContainer for \"9c13b77a9d44aec9517ce9e75e5d21b93c10b938fa72c6e9bd9c57b28faebfd4\"" Jan 30 15:26:05.593723 containerd[1475]: time="2025-01-30T15:26:05.593677119Z" level=info msg="CreateContainer within sandbox \"813f0745c0106f3df4d94961a51ee23b9ab22b099472e088296a031b3a9df817\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 15:26:05.601653 containerd[1475]: 
time="2025-01-30T15:26:05.601612196Z" level=info msg="CreateContainer within sandbox \"9ae7517a25dcb9ba10a8908d87b2da9d59e0122cfedac74777883169a4b5a820\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 15:26:05.610616 containerd[1475]: time="2025-01-30T15:26:05.610400388Z" level=info msg="CreateContainer within sandbox \"813f0745c0106f3df4d94961a51ee23b9ab22b099472e088296a031b3a9df817\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"468d8320606c0ded5f4ce72193a519189b9b4e583d72b43a763de105c0f081cb\"" Jan 30 15:26:05.611008 containerd[1475]: time="2025-01-30T15:26:05.610960665Z" level=info msg="StartContainer for \"468d8320606c0ded5f4ce72193a519189b9b4e583d72b43a763de105c0f081cb\"" Jan 30 15:26:05.620720 containerd[1475]: time="2025-01-30T15:26:05.620640133Z" level=info msg="CreateContainer within sandbox \"9ae7517a25dcb9ba10a8908d87b2da9d59e0122cfedac74777883169a4b5a820\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a529a609d4ff5c88cf78c0f4c8d72b4f0760f15909f32322b48207c209cc7d87\"" Jan 30 15:26:05.621496 containerd[1475]: time="2025-01-30T15:26:05.621474889Z" level=info msg="StartContainer for \"a529a609d4ff5c88cf78c0f4c8d72b4f0760f15909f32322b48207c209cc7d87\"" Jan 30 15:26:05.631993 systemd[1]: Started cri-containerd-9c13b77a9d44aec9517ce9e75e5d21b93c10b938fa72c6e9bd9c57b28faebfd4.scope - libcontainer container 9c13b77a9d44aec9517ce9e75e5d21b93c10b938fa72c6e9bd9c57b28faebfd4. Jan 30 15:26:05.648686 systemd[1]: Started cri-containerd-468d8320606c0ded5f4ce72193a519189b9b4e583d72b43a763de105c0f081cb.scope - libcontainer container 468d8320606c0ded5f4ce72193a519189b9b4e583d72b43a763de105c0f081cb. Jan 30 15:26:05.678923 systemd[1]: Started cri-containerd-a529a609d4ff5c88cf78c0f4c8d72b4f0760f15909f32322b48207c209cc7d87.scope - libcontainer container a529a609d4ff5c88cf78c0f4c8d72b4f0760f15909f32322b48207c209cc7d87. 
Jan 30 15:26:05.706497 containerd[1475]: time="2025-01-30T15:26:05.706456950Z" level=info msg="StartContainer for \"468d8320606c0ded5f4ce72193a519189b9b4e583d72b43a763de105c0f081cb\" returns successfully" Jan 30 15:26:05.710877 containerd[1475]: time="2025-01-30T15:26:05.710810406Z" level=info msg="StartContainer for \"9c13b77a9d44aec9517ce9e75e5d21b93c10b938fa72c6e9bd9c57b28faebfd4\" returns successfully" Jan 30 15:26:05.757234 containerd[1475]: time="2025-01-30T15:26:05.757194316Z" level=info msg="StartContainer for \"a529a609d4ff5c88cf78c0f4c8d72b4f0760f15909f32322b48207c209cc7d87\" returns successfully" Jan 30 15:26:05.762937 kubelet[2337]: E0130 15:26:05.762789 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-b815e480da?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="1.6s" Jan 30 15:26:05.939145 kubelet[2337]: I0130 15:26:05.939107 2337 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:08.118477 kubelet[2337]: E0130 15:26:08.118418 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-1-b815e480da\" not found" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:08.261408 kubelet[2337]: I0130 15:26:08.261122 2337 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:08.326982 kubelet[2337]: I0130 15:26:08.326933 2337 apiserver.go:52] "Watching apiserver" Jan 30 15:26:08.348900 kubelet[2337]: I0130 15:26:08.348811 2337 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 15:26:10.444823 systemd[1]: Reloading requested from client PID 2605 ('systemctl') (unit session-7.scope)... Jan 30 15:26:10.444842 systemd[1]: Reloading... Jan 30 15:26:10.541637 zram_generator::config[2645]: No configuration found. 
Jan 30 15:26:10.642374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:26:10.729639 systemd[1]: Reloading finished in 284 ms. Jan 30 15:26:10.767641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:10.780730 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:26:10.781410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:10.781709 systemd[1]: kubelet.service: Consumed 1.249s CPU time, 112.9M memory peak, 0B memory swap peak. Jan 30 15:26:10.797192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:26:10.920565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:26:10.934158 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:26:10.991795 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:26:10.991795 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:26:10.991795 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:26:10.991795 kubelet[2690]: I0130 15:26:10.991103 2690 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:26:10.999275 kubelet[2690]: I0130 15:26:10.999236 2690 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 15:26:10.999450 kubelet[2690]: I0130 15:26:10.999438 2690 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:26:10.999850 kubelet[2690]: I0130 15:26:10.999831 2690 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 15:26:11.001463 kubelet[2690]: I0130 15:26:11.001432 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:26:11.004446 kubelet[2690]: I0130 15:26:11.004412 2690 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:26:11.009548 kubelet[2690]: E0130 15:26:11.009492 2690 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:26:11.009548 kubelet[2690]: I0130 15:26:11.009550 2690 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 15:26:11.013758 kubelet[2690]: I0130 15:26:11.013732 2690 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 15:26:11.013957 kubelet[2690]: I0130 15:26:11.013869 2690 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 15:26:11.014042 kubelet[2690]: I0130 15:26:11.014001 2690 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:26:11.014243 kubelet[2690]: I0130 15:26:11.014041 2690 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-1-b815e480da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:26:11.014353 kubelet[2690]: I0130 15:26:11.014250 2690 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:26:11.014353 kubelet[2690]: I0130 15:26:11.014260 2690 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 15:26:11.014353 kubelet[2690]: I0130 15:26:11.014299 2690 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:26:11.014475 kubelet[2690]: I0130 15:26:11.014421 2690 kubelet.go:408] "Attempting to sync node with API server" Jan 30 15:26:11.014475 kubelet[2690]: I0130 15:26:11.014440 2690 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:26:11.014475 kubelet[2690]: I0130 15:26:11.014463 2690 kubelet.go:314] "Adding apiserver pod source" Jan 30 15:26:11.014475 kubelet[2690]: I0130 15:26:11.014474 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:26:11.016464 kubelet[2690]: I0130 15:26:11.016429 2690 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:26:11.017339 kubelet[2690]: I0130 15:26:11.017079 2690 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:26:11.021288 kubelet[2690]: I0130 15:26:11.021259 2690 server.go:1269] "Started kubelet" Jan 30 15:26:11.024087 kubelet[2690]: I0130 15:26:11.024064 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:26:11.027443 kubelet[2690]: I0130 15:26:11.027402 2690 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:26:11.028718 kubelet[2690]: I0130 15:26:11.028560 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:26:11.038510 kubelet[2690]: I0130 15:26:11.038478 2690 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 
30 15:26:11.039769 kubelet[2690]: I0130 15:26:11.031220 2690 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 15:26:11.041465 kubelet[2690]: I0130 15:26:11.041419 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:26:11.042684 kubelet[2690]: I0130 15:26:11.042656 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:26:11.042684 kubelet[2690]: I0130 15:26:11.042684 2690 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:26:11.042789 kubelet[2690]: I0130 15:26:11.042707 2690 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 15:26:11.042789 kubelet[2690]: E0130 15:26:11.042756 2690 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:26:11.042850 kubelet[2690]: E0130 15:26:11.031505 2690 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-1-b815e480da\" not found" Jan 30 15:26:11.051576 kubelet[2690]: I0130 15:26:11.038177 2690 server.go:460] "Adding debug handlers to kubelet server" Jan 30 15:26:11.055725 kubelet[2690]: I0130 15:26:11.029095 2690 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:26:11.056804 kubelet[2690]: I0130 15:26:11.031321 2690 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 15:26:11.057065 kubelet[2690]: I0130 15:26:11.057049 2690 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:26:11.059170 kubelet[2690]: I0130 15:26:11.058811 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:26:11.065814 kubelet[2690]: I0130 15:26:11.065776 2690 
factory.go:221] Registration of the containerd container factory successfully Jan 30 15:26:11.065814 kubelet[2690]: I0130 15:26:11.065807 2690 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:26:11.086552 kubelet[2690]: E0130 15:26:11.086307 2690 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:26:11.143091 kubelet[2690]: E0130 15:26:11.142890 2690 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144153 2690 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144221 2690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144285 2690 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144686 2690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144717 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:26:11.145405 kubelet[2690]: I0130 15:26:11.144785 2690 policy_none.go:49] "None policy: Start" Jan 30 15:26:11.147571 kubelet[2690]: I0130 15:26:11.147411 2690 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:26:11.147571 kubelet[2690]: I0130 15:26:11.147448 2690 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:26:11.148038 kubelet[2690]: I0130 15:26:11.147796 2690 state_mem.go:75] "Updated machine memory state" Jan 30 15:26:11.155704 kubelet[2690]: I0130 15:26:11.154969 2690 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:26:11.155704 kubelet[2690]: I0130 15:26:11.155173 2690 eviction_manager.go:189] "Eviction manager: starting 
control loop" Jan 30 15:26:11.155704 kubelet[2690]: I0130 15:26:11.155185 2690 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:26:11.155704 kubelet[2690]: I0130 15:26:11.155462 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:26:11.265965 kubelet[2690]: I0130 15:26:11.265926 2690 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.277654 kubelet[2690]: I0130 15:26:11.277617 2690 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.277830 kubelet[2690]: I0130 15:26:11.277723 2690 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359337 kubelet[2690]: I0130 15:26:11.359208 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359337 kubelet[2690]: I0130 15:26:11.359286 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359337 kubelet[2690]: I0130 15:26:11.359312 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa6f87c8510ae686e4520db81330e383-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-1-b815e480da\" (UID: \"aa6f87c8510ae686e4520db81330e383\") " 
pod="kube-system/kube-scheduler-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359838 kubelet[2690]: I0130 15:26:11.359626 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359838 kubelet[2690]: I0130 15:26:11.359663 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359838 kubelet[2690]: I0130 15:26:11.359702 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359838 kubelet[2690]: I0130 15:26:11.359728 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba09c0cd6b44eacdaf8b0740a14b7a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" (UID: \"fba09c0cd6b44eacdaf8b0740a14b7a6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.359838 kubelet[2690]: I0130 15:26:11.359754 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.360024 kubelet[2690]: I0130 15:26:11.359785 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1821fb7a5b1b0971fbc906e88dd671be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-1-b815e480da\" (UID: \"1821fb7a5b1b0971fbc906e88dd671be\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" Jan 30 15:26:11.445266 sudo[2722]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 15:26:11.445735 sudo[2722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 15:26:11.898897 sudo[2722]: pam_unix(sudo:session): session closed for user root Jan 30 15:26:12.015988 kubelet[2690]: I0130 15:26:12.015735 2690 apiserver.go:52] "Watching apiserver" Jan 30 15:26:12.057480 kubelet[2690]: I0130 15:26:12.057402 2690 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 15:26:12.119990 kubelet[2690]: E0130 15:26:12.119906 2690 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-1-b815e480da\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" Jan 30 15:26:12.125039 kubelet[2690]: I0130 15:26:12.123876 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-1-b815e480da" podStartSLOduration=1.123850743 podStartE2EDuration="1.123850743s" podCreationTimestamp="2025-01-30 15:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:12.104838029 +0000 UTC 
m=+1.165158800" watchObservedRunningTime="2025-01-30 15:26:12.123850743 +0000 UTC m=+1.184171474" Jan 30 15:26:12.138492 kubelet[2690]: I0130 15:26:12.138425 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-1-b815e480da" podStartSLOduration=1.138406117 podStartE2EDuration="1.138406117s" podCreationTimestamp="2025-01-30 15:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:12.124099182 +0000 UTC m=+1.184419913" watchObservedRunningTime="2025-01-30 15:26:12.138406117 +0000 UTC m=+1.198726848" Jan 30 15:26:12.154316 kubelet[2690]: I0130 15:26:12.154105 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-1-b815e480da" podStartSLOduration=1.154081046 podStartE2EDuration="1.154081046s" podCreationTimestamp="2025-01-30 15:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:12.13995863 +0000 UTC m=+1.200279361" watchObservedRunningTime="2025-01-30 15:26:12.154081046 +0000 UTC m=+1.214401817" Jan 30 15:26:13.791283 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 30 15:26:13.953692 sshd[1847]: pam_unix(sshd:session): session closed for user core Jan 30 15:26:13.957198 systemd[1]: sshd@6-49.13.124.2:22-139.178.68.195:59316.service: Deactivated successfully. Jan 30 15:26:13.959568 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:26:13.960080 systemd[1]: session-7.scope: Consumed 8.632s CPU time, 151.8M memory peak, 0B memory swap peak. Jan 30 15:26:13.962984 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:26:13.965088 systemd-logind[1453]: Removed session 7. 
Jan 30 15:26:17.645317 kubelet[2690]: I0130 15:26:17.645267 2690 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:26:17.646169 containerd[1475]: time="2025-01-30T15:26:17.646013815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:26:17.647007 kubelet[2690]: I0130 15:26:17.646236 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:26:18.615794 systemd[1]: Created slice kubepods-besteffort-pod1b856e28_efc7_4f88_a1c0_543ed4f8aad6.slice - libcontainer container kubepods-besteffort-pod1b856e28_efc7_4f88_a1c0_543ed4f8aad6.slice. Jan 30 15:26:18.631226 systemd[1]: Created slice kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice - libcontainer container kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice. Jan 30 15:26:18.709556 kubelet[2690]: I0130 15:26:18.709405 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cni-path\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.709556 kubelet[2690]: I0130 15:26:18.709515 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kd5c\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-kube-api-access-9kd5c\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.709556 kubelet[2690]: I0130 15:26:18.709555 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-config-path\") pod \"cilium-vwmfm\" (UID: 
\"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710187 kubelet[2690]: I0130 15:26:18.709611 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b856e28-efc7-4f88-a1c0-543ed4f8aad6-kube-proxy\") pod \"kube-proxy-4stnw\" (UID: \"1b856e28-efc7-4f88-a1c0-543ed4f8aad6\") " pod="kube-system/kube-proxy-4stnw" Jan 30 15:26:18.710187 kubelet[2690]: I0130 15:26:18.709645 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b856e28-efc7-4f88-a1c0-543ed4f8aad6-lib-modules\") pod \"kube-proxy-4stnw\" (UID: \"1b856e28-efc7-4f88-a1c0-543ed4f8aad6\") " pod="kube-system/kube-proxy-4stnw" Jan 30 15:26:18.710187 kubelet[2690]: I0130 15:26:18.709672 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vzl\" (UniqueName: \"kubernetes.io/projected/1b856e28-efc7-4f88-a1c0-543ed4f8aad6-kube-api-access-h5vzl\") pod \"kube-proxy-4stnw\" (UID: \"1b856e28-efc7-4f88-a1c0-543ed4f8aad6\") " pod="kube-system/kube-proxy-4stnw" Jan 30 15:26:18.710187 kubelet[2690]: I0130 15:26:18.709701 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hostproc\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710187 kubelet[2690]: I0130 15:26:18.709730 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-etc-cni-netd\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710187 kubelet[2690]: 
I0130 15:26:18.709759 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-xtables-lock\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709796 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-kernel\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709845 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-net\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709873 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hubble-tls\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709899 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b856e28-efc7-4f88-a1c0-543ed4f8aad6-xtables-lock\") pod \"kube-proxy-4stnw\" (UID: \"1b856e28-efc7-4f88-a1c0-543ed4f8aad6\") " pod="kube-system/kube-proxy-4stnw" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709938 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-cgroup\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710479 kubelet[2690]: I0130 15:26:18.709968 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-run\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710800 kubelet[2690]: I0130 15:26:18.709995 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f6cfd4-42b6-4e44-813b-fb72c58475e7-clustermesh-secrets\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710800 kubelet[2690]: I0130 15:26:18.710025 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-lib-modules\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.710800 kubelet[2690]: I0130 15:26:18.710066 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-bpf-maps\") pod \"cilium-vwmfm\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") " pod="kube-system/cilium-vwmfm" Jan 30 15:26:18.768364 systemd[1]: Created slice kubepods-besteffort-pod4780db41_30f7_484d_af3d_0169d62c36c5.slice - libcontainer container kubepods-besteffort-pod4780db41_30f7_484d_af3d_0169d62c36c5.slice. 
Jan 30 15:26:18.812771 kubelet[2690]: I0130 15:26:18.810715 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq666\" (UniqueName: \"kubernetes.io/projected/4780db41-30f7-484d-af3d-0169d62c36c5-kube-api-access-cq666\") pod \"cilium-operator-5d85765b45-qs8p8\" (UID: \"4780db41-30f7-484d-af3d-0169d62c36c5\") " pod="kube-system/cilium-operator-5d85765b45-qs8p8" Jan 30 15:26:18.812771 kubelet[2690]: I0130 15:26:18.811034 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4780db41-30f7-484d-af3d-0169d62c36c5-cilium-config-path\") pod \"cilium-operator-5d85765b45-qs8p8\" (UID: \"4780db41-30f7-484d-af3d-0169d62c36c5\") " pod="kube-system/cilium-operator-5d85765b45-qs8p8" Jan 30 15:26:18.929728 containerd[1475]: time="2025-01-30T15:26:18.927642557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4stnw,Uid:1b856e28-efc7-4f88-a1c0-543ed4f8aad6,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:18.937319 containerd[1475]: time="2025-01-30T15:26:18.937083600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwmfm,Uid:02f6cfd4-42b6-4e44-813b-fb72c58475e7,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:18.963572 containerd[1475]: time="2025-01-30T15:26:18.963170298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:18.963844 containerd[1475]: time="2025-01-30T15:26:18.963702216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:18.963980 containerd[1475]: time="2025-01-30T15:26:18.963833096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:18.964149 containerd[1475]: time="2025-01-30T15:26:18.964103774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:18.967104 containerd[1475]: time="2025-01-30T15:26:18.966968723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:18.967104 containerd[1475]: time="2025-01-30T15:26:18.967041843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:18.967280 containerd[1475]: time="2025-01-30T15:26:18.967099363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:18.967280 containerd[1475]: time="2025-01-30T15:26:18.967190082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:18.985824 systemd[1]: Started cri-containerd-dc53120963759a2226a25a05fd380793dd6d026ec3063464ca9e0a9f4d4150c0.scope - libcontainer container dc53120963759a2226a25a05fd380793dd6d026ec3063464ca9e0a9f4d4150c0. Jan 30 15:26:18.992624 systemd[1]: Started cri-containerd-5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97.scope - libcontainer container 5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97. 
Jan 30 15:26:19.023649 containerd[1475]: time="2025-01-30T15:26:19.023509304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4stnw,Uid:1b856e28-efc7-4f88-a1c0-543ed4f8aad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc53120963759a2226a25a05fd380793dd6d026ec3063464ca9e0a9f4d4150c0\"" Jan 30 15:26:19.029847 containerd[1475]: time="2025-01-30T15:26:19.029808559Z" level=info msg="CreateContainer within sandbox \"dc53120963759a2226a25a05fd380793dd6d026ec3063464ca9e0a9f4d4150c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:26:19.037685 containerd[1475]: time="2025-01-30T15:26:19.037634289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwmfm,Uid:02f6cfd4-42b6-4e44-813b-fb72c58475e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\"" Jan 30 15:26:19.041421 containerd[1475]: time="2025-01-30T15:26:19.041380955Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 15:26:19.056214 containerd[1475]: time="2025-01-30T15:26:19.056081059Z" level=info msg="CreateContainer within sandbox \"dc53120963759a2226a25a05fd380793dd6d026ec3063464ca9e0a9f4d4150c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41644460ca5294406132c78d55034a5af60f08f2e5394a0739b79548554dcac1\"" Jan 30 15:26:19.056816 containerd[1475]: time="2025-01-30T15:26:19.056793816Z" level=info msg="StartContainer for \"41644460ca5294406132c78d55034a5af60f08f2e5394a0739b79548554dcac1\"" Jan 30 15:26:19.075414 containerd[1475]: time="2025-01-30T15:26:19.075315145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qs8p8,Uid:4780db41-30f7-484d-af3d-0169d62c36c5,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:19.083267 systemd[1]: Started cri-containerd-41644460ca5294406132c78d55034a5af60f08f2e5394a0739b79548554dcac1.scope 
- libcontainer container 41644460ca5294406132c78d55034a5af60f08f2e5394a0739b79548554dcac1. Jan 30 15:26:19.116004 containerd[1475]: time="2025-01-30T15:26:19.115797470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:19.116004 containerd[1475]: time="2025-01-30T15:26:19.115870870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:19.116004 containerd[1475]: time="2025-01-30T15:26:19.115885470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:19.117370 containerd[1475]: time="2025-01-30T15:26:19.117296184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:19.127447 containerd[1475]: time="2025-01-30T15:26:19.127365266Z" level=info msg="StartContainer for \"41644460ca5294406132c78d55034a5af60f08f2e5394a0739b79548554dcac1\" returns successfully" Jan 30 15:26:19.147923 kubelet[2690]: I0130 15:26:19.147694 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4stnw" podStartSLOduration=1.147669308 podStartE2EDuration="1.147669308s" podCreationTimestamp="2025-01-30 15:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:19.146687032 +0000 UTC m=+8.207007763" watchObservedRunningTime="2025-01-30 15:26:19.147669308 +0000 UTC m=+8.207990039" Jan 30 15:26:19.149772 systemd[1]: Started cri-containerd-9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4.scope - libcontainer container 9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4. 
Jan 30 15:26:19.194410 containerd[1475]: time="2025-01-30T15:26:19.194251170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qs8p8,Uid:4780db41-30f7-484d-af3d-0169d62c36c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\"" Jan 30 15:26:23.311109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370151686.mount: Deactivated successfully. Jan 30 15:26:24.670654 containerd[1475]: time="2025-01-30T15:26:24.670551159Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:26:24.672523 containerd[1475]: time="2025-01-30T15:26:24.672489392Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 15:26:24.673941 containerd[1475]: time="2025-01-30T15:26:24.672825071Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:26:24.674506 containerd[1475]: time="2025-01-30T15:26:24.674468905Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.63304547s" Jan 30 15:26:24.674506 containerd[1475]: time="2025-01-30T15:26:24.674504185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 15:26:24.676065 containerd[1475]: time="2025-01-30T15:26:24.675995140Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 15:26:24.679726 containerd[1475]: time="2025-01-30T15:26:24.679688447Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:26:24.696089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969706560.mount: Deactivated successfully. Jan 30 15:26:24.702665 containerd[1475]: time="2025-01-30T15:26:24.700997094Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\"" Jan 30 15:26:24.702665 containerd[1475]: time="2025-01-30T15:26:24.701752972Z" level=info msg="StartContainer for \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\"" Jan 30 15:26:24.735980 systemd[1]: Started cri-containerd-4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb.scope - libcontainer container 4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb. Jan 30 15:26:24.766126 containerd[1475]: time="2025-01-30T15:26:24.765948312Z" level=info msg="StartContainer for \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\" returns successfully" Jan 30 15:26:24.782779 systemd[1]: cri-containerd-4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb.scope: Deactivated successfully. 
Jan 30 15:26:24.996732 containerd[1475]: time="2025-01-30T15:26:24.996426444Z" level=info msg="shim disconnected" id=4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb namespace=k8s.io Jan 30 15:26:24.996732 containerd[1475]: time="2025-01-30T15:26:24.996678484Z" level=warning msg="cleaning up after shim disconnected" id=4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb namespace=k8s.io Jan 30 15:26:24.996732 containerd[1475]: time="2025-01-30T15:26:24.996696923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:26:25.157067 containerd[1475]: time="2025-01-30T15:26:25.156872267Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:26:25.170703 containerd[1475]: time="2025-01-30T15:26:25.170460942Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\"" Jan 30 15:26:25.171295 containerd[1475]: time="2025-01-30T15:26:25.171257899Z" level=info msg="StartContainer for \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\"" Jan 30 15:26:25.208862 systemd[1]: Started cri-containerd-5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff.scope - libcontainer container 5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff. Jan 30 15:26:25.235879 containerd[1475]: time="2025-01-30T15:26:25.235751684Z" level=info msg="StartContainer for \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\" returns successfully" Jan 30 15:26:25.249515 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:26:25.250123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 15:26:25.250367 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:26:25.258901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:26:25.259114 systemd[1]: cri-containerd-5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff.scope: Deactivated successfully. Jan 30 15:26:25.284754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:26:25.294526 containerd[1475]: time="2025-01-30T15:26:25.294360888Z" level=info msg="shim disconnected" id=5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff namespace=k8s.io Jan 30 15:26:25.294526 containerd[1475]: time="2025-01-30T15:26:25.294422567Z" level=warning msg="cleaning up after shim disconnected" id=5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff namespace=k8s.io Jan 30 15:26:25.294526 containerd[1475]: time="2025-01-30T15:26:25.294435967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:26:25.693973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb-rootfs.mount: Deactivated successfully. 
Jan 30 15:26:26.169726 containerd[1475]: time="2025-01-30T15:26:26.169656773Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:26:26.193046 containerd[1475]: time="2025-01-30T15:26:26.192955256Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\"" Jan 30 15:26:26.197012 containerd[1475]: time="2025-01-30T15:26:26.193858293Z" level=info msg="StartContainer for \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\"" Jan 30 15:26:26.232853 systemd[1]: Started cri-containerd-e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b.scope - libcontainer container e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b. Jan 30 15:26:26.265968 containerd[1475]: time="2025-01-30T15:26:26.265918258Z" level=info msg="StartContainer for \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\" returns successfully" Jan 30 15:26:26.267736 systemd[1]: cri-containerd-e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b.scope: Deactivated successfully. 
Jan 30 15:26:26.298345 containerd[1475]: time="2025-01-30T15:26:26.298239152Z" level=info msg="shim disconnected" id=e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b namespace=k8s.io Jan 30 15:26:26.298345 containerd[1475]: time="2025-01-30T15:26:26.298317312Z" level=warning msg="cleaning up after shim disconnected" id=e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b namespace=k8s.io Jan 30 15:26:26.298345 containerd[1475]: time="2025-01-30T15:26:26.298344351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:26:26.692621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b-rootfs.mount: Deactivated successfully. Jan 30 15:26:27.174754 containerd[1475]: time="2025-01-30T15:26:27.174700736Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:26:27.193964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739818686.mount: Deactivated successfully. Jan 30 15:26:27.199045 containerd[1475]: time="2025-01-30T15:26:27.198996738Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\"" Jan 30 15:26:27.200703 containerd[1475]: time="2025-01-30T15:26:27.199496097Z" level=info msg="StartContainer for \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\"" Jan 30 15:26:27.234900 systemd[1]: Started cri-containerd-6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b.scope - libcontainer container 6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b. 
Jan 30 15:26:27.277065 systemd[1]: cri-containerd-6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b.scope: Deactivated successfully. Jan 30 15:26:27.283107 containerd[1475]: time="2025-01-30T15:26:27.282818350Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice/cri-containerd-6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b.scope/memory.events\": no such file or directory" Jan 30 15:26:27.284903 containerd[1475]: time="2025-01-30T15:26:27.284861663Z" level=info msg="StartContainer for \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\" returns successfully" Jan 30 15:26:27.314427 containerd[1475]: time="2025-01-30T15:26:27.314354809Z" level=info msg="shim disconnected" id=6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b namespace=k8s.io Jan 30 15:26:27.314427 containerd[1475]: time="2025-01-30T15:26:27.314425769Z" level=warning msg="cleaning up after shim disconnected" id=6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b namespace=k8s.io Jan 30 15:26:27.315044 containerd[1475]: time="2025-01-30T15:26:27.314441649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:26:27.691750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b-rootfs.mount: Deactivated successfully. 
Jan 30 15:26:27.967439 containerd[1475]: time="2025-01-30T15:26:27.967296798Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:26:27.968753 containerd[1475]: time="2025-01-30T15:26:27.968639394Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 15:26:27.969559 containerd[1475]: time="2025-01-30T15:26:27.969209072Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:26:27.971933 containerd[1475]: time="2025-01-30T15:26:27.971852104Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.295783564s" Jan 30 15:26:27.971933 containerd[1475]: time="2025-01-30T15:26:27.971898544Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 15:26:27.975967 containerd[1475]: time="2025-01-30T15:26:27.975919251Z" level=info msg="CreateContainer within sandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 15:26:28.005184 containerd[1475]: time="2025-01-30T15:26:28.005119117Z" level=info msg="CreateContainer within sandbox 
\"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\"" Jan 30 15:26:28.007139 containerd[1475]: time="2025-01-30T15:26:28.006031075Z" level=info msg="StartContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\"" Jan 30 15:26:28.040789 systemd[1]: Started cri-containerd-f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350.scope - libcontainer container f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350. Jan 30 15:26:28.069506 containerd[1475]: time="2025-01-30T15:26:28.069460876Z" level=info msg="StartContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" returns successfully" Jan 30 15:26:28.186527 containerd[1475]: time="2025-01-30T15:26:28.186094510Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:26:28.210364 containerd[1475]: time="2025-01-30T15:26:28.210298074Z" level=info msg="CreateContainer within sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\"" Jan 30 15:26:28.211849 containerd[1475]: time="2025-01-30T15:26:28.211815470Z" level=info msg="StartContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\"" Jan 30 15:26:28.253812 systemd[1]: Started cri-containerd-e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975.scope - libcontainer container e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975. 
Jan 30 15:26:28.358404 containerd[1475]: time="2025-01-30T15:26:28.358270211Z" level=info msg="StartContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" returns successfully" Jan 30 15:26:28.534630 kubelet[2690]: I0130 15:26:28.533997 2690 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 15:26:28.646025 kubelet[2690]: I0130 15:26:28.645938 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qs8p8" podStartSLOduration=1.8696846489999999 podStartE2EDuration="10.645919549s" podCreationTimestamp="2025-01-30 15:26:18 +0000 UTC" firstStartedPulling="2025-01-30 15:26:19.196575241 +0000 UTC m=+8.256895972" lastFinishedPulling="2025-01-30 15:26:27.972810141 +0000 UTC m=+17.033130872" observedRunningTime="2025-01-30 15:26:28.242188295 +0000 UTC m=+17.302509026" watchObservedRunningTime="2025-01-30 15:26:28.645919549 +0000 UTC m=+17.706240280" Jan 30 15:26:28.654479 systemd[1]: Created slice kubepods-burstable-podcc421603_09a0_4bba_bd51_117279624968.slice - libcontainer container kubepods-burstable-podcc421603_09a0_4bba_bd51_117279624968.slice. Jan 30 15:26:28.667353 systemd[1]: Created slice kubepods-burstable-pod0d4a77e9_f8e4_47ab_93d2_5ede2dec0494.slice - libcontainer container kubepods-burstable-pod0d4a77e9_f8e4_47ab_93d2_5ede2dec0494.slice. 
Jan 30 15:26:28.702411 kubelet[2690]: I0130 15:26:28.702013 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sqt\" (UniqueName: \"kubernetes.io/projected/cc421603-09a0-4bba-bd51-117279624968-kube-api-access-56sqt\") pod \"coredns-6f6b679f8f-ctqk7\" (UID: \"cc421603-09a0-4bba-bd51-117279624968\") " pod="kube-system/coredns-6f6b679f8f-ctqk7" Jan 30 15:26:28.702411 kubelet[2690]: I0130 15:26:28.702060 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc421603-09a0-4bba-bd51-117279624968-config-volume\") pod \"coredns-6f6b679f8f-ctqk7\" (UID: \"cc421603-09a0-4bba-bd51-117279624968\") " pod="kube-system/coredns-6f6b679f8f-ctqk7" Jan 30 15:26:28.702411 kubelet[2690]: I0130 15:26:28.702082 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d4a77e9-f8e4-47ab-93d2-5ede2dec0494-config-volume\") pod \"coredns-6f6b679f8f-t4ghq\" (UID: \"0d4a77e9-f8e4-47ab-93d2-5ede2dec0494\") " pod="kube-system/coredns-6f6b679f8f-t4ghq" Jan 30 15:26:28.702411 kubelet[2690]: I0130 15:26:28.702098 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4fkg\" (UniqueName: \"kubernetes.io/projected/0d4a77e9-f8e4-47ab-93d2-5ede2dec0494-kube-api-access-m4fkg\") pod \"coredns-6f6b679f8f-t4ghq\" (UID: \"0d4a77e9-f8e4-47ab-93d2-5ede2dec0494\") " pod="kube-system/coredns-6f6b679f8f-t4ghq" Jan 30 15:26:28.959334 containerd[1475]: time="2025-01-30T15:26:28.959201527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ctqk7,Uid:cc421603-09a0-4bba-bd51-117279624968,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:28.976744 containerd[1475]: time="2025-01-30T15:26:28.972106487Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-t4ghq,Uid:0d4a77e9-f8e4-47ab-93d2-5ede2dec0494,Namespace:kube-system,Attempt:0,}" Jan 30 15:26:29.227930 kubelet[2690]: I0130 15:26:29.227470 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vwmfm" podStartSLOduration=5.591078643 podStartE2EDuration="11.227454381s" podCreationTimestamp="2025-01-30 15:26:18 +0000 UTC" firstStartedPulling="2025-01-30 15:26:19.039466122 +0000 UTC m=+8.099786853" lastFinishedPulling="2025-01-30 15:26:24.67584182 +0000 UTC m=+13.736162591" observedRunningTime="2025-01-30 15:26:29.227077983 +0000 UTC m=+18.287398754" watchObservedRunningTime="2025-01-30 15:26:29.227454381 +0000 UTC m=+18.287775112" Jan 30 15:26:31.754821 systemd-networkd[1374]: cilium_host: Link UP Jan 30 15:26:31.755165 systemd-networkd[1374]: cilium_net: Link UP Jan 30 15:26:31.755471 systemd-networkd[1374]: cilium_net: Gained carrier Jan 30 15:26:31.755819 systemd-networkd[1374]: cilium_host: Gained carrier Jan 30 15:26:31.869353 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 30 15:26:31.869582 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 30 15:26:32.036928 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 30 15:26:32.167671 kernel: NET: Registered PF_ALG protocol family Jan 30 15:26:32.661153 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 30 15:26:32.915581 systemd-networkd[1374]: lxc_health: Link UP Jan 30 15:26:32.920903 systemd-networkd[1374]: lxc_health: Gained carrier Jan 30 15:26:33.429855 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 30 15:26:33.548206 systemd-networkd[1374]: lxc43e46db25bf5: Link UP Jan 30 15:26:33.555335 systemd-networkd[1374]: lxc74758499e812: Link UP Jan 30 15:26:33.561654 kernel: eth0: renamed from tmp561d0 Jan 30 15:26:33.568925 systemd-networkd[1374]: lxc43e46db25bf5: Gained carrier Jan 30 15:26:33.570618 kernel: eth0: renamed from tmp213a1 Jan 30 15:26:33.580777 systemd-networkd[1374]: lxc74758499e812: Gained 
carrier Jan 30 15:26:34.644988 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 30 15:26:34.708955 systemd-networkd[1374]: lxc43e46db25bf5: Gained IPv6LL Jan 30 15:26:34.710898 systemd-networkd[1374]: lxc74758499e812: Gained IPv6LL Jan 30 15:26:37.619419 containerd[1475]: time="2025-01-30T15:26:37.618195434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:37.619419 containerd[1475]: time="2025-01-30T15:26:37.618350114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:37.619419 containerd[1475]: time="2025-01-30T15:26:37.618481953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:37.620605 containerd[1475]: time="2025-01-30T15:26:37.620062389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:37.632308 containerd[1475]: time="2025-01-30T15:26:37.631949998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:26:37.632308 containerd[1475]: time="2025-01-30T15:26:37.632172278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:26:37.632744 containerd[1475]: time="2025-01-30T15:26:37.632185278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:37.633131 containerd[1475]: time="2025-01-30T15:26:37.633050275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:26:37.664832 systemd[1]: Started cri-containerd-213a17f1f940dc6608288b4750fcadeb3458905fcbf559a7fda83d41ebc3a823.scope - libcontainer container 213a17f1f940dc6608288b4750fcadeb3458905fcbf559a7fda83d41ebc3a823. Jan 30 15:26:37.670617 systemd[1]: Started cri-containerd-561d06c2a80dc216460b6309034e7f86863c791cd1fbea498323435eb9779e8e.scope - libcontainer container 561d06c2a80dc216460b6309034e7f86863c791cd1fbea498323435eb9779e8e. Jan 30 15:26:37.734571 containerd[1475]: time="2025-01-30T15:26:37.734513291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ctqk7,Uid:cc421603-09a0-4bba-bd51-117279624968,Namespace:kube-system,Attempt:0,} returns sandbox id \"213a17f1f940dc6608288b4750fcadeb3458905fcbf559a7fda83d41ebc3a823\"" Jan 30 15:26:37.737241 containerd[1475]: time="2025-01-30T15:26:37.737204523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4ghq,Uid:0d4a77e9-f8e4-47ab-93d2-5ede2dec0494,Namespace:kube-system,Attempt:0,} returns sandbox id \"561d06c2a80dc216460b6309034e7f86863c791cd1fbea498323435eb9779e8e\"" Jan 30 15:26:37.742695 containerd[1475]: time="2025-01-30T15:26:37.742528670Z" level=info msg="CreateContainer within sandbox \"213a17f1f940dc6608288b4750fcadeb3458905fcbf559a7fda83d41ebc3a823\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:26:37.743320 containerd[1475]: time="2025-01-30T15:26:37.743132108Z" level=info msg="CreateContainer within sandbox \"561d06c2a80dc216460b6309034e7f86863c791cd1fbea498323435eb9779e8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:26:37.765706 containerd[1475]: time="2025-01-30T15:26:37.765070291Z" level=info msg="CreateContainer within sandbox \"561d06c2a80dc216460b6309034e7f86863c791cd1fbea498323435eb9779e8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3493496cd1a5d828068dfa9bfb341f1f34a9146d3926831f9ec0414df00351d6\"" Jan 
30 15:26:37.767749 containerd[1475]: time="2025-01-30T15:26:37.767368365Z" level=info msg="StartContainer for \"3493496cd1a5d828068dfa9bfb341f1f34a9146d3926831f9ec0414df00351d6\"" Jan 30 15:26:37.781606 containerd[1475]: time="2025-01-30T15:26:37.781353488Z" level=info msg="CreateContainer within sandbox \"213a17f1f940dc6608288b4750fcadeb3458905fcbf559a7fda83d41ebc3a823\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"246f830562e50c2f707de99dde34cfa54812c3d96a21b59449c68b290bcfe30c\"" Jan 30 15:26:37.784651 containerd[1475]: time="2025-01-30T15:26:37.784617280Z" level=info msg="StartContainer for \"246f830562e50c2f707de99dde34cfa54812c3d96a21b59449c68b290bcfe30c\"" Jan 30 15:26:37.812781 systemd[1]: Started cri-containerd-3493496cd1a5d828068dfa9bfb341f1f34a9146d3926831f9ec0414df00351d6.scope - libcontainer container 3493496cd1a5d828068dfa9bfb341f1f34a9146d3926831f9ec0414df00351d6. Jan 30 15:26:37.820804 systemd[1]: Started cri-containerd-246f830562e50c2f707de99dde34cfa54812c3d96a21b59449c68b290bcfe30c.scope - libcontainer container 246f830562e50c2f707de99dde34cfa54812c3d96a21b59449c68b290bcfe30c. 
Jan 30 15:26:37.863968 containerd[1475]: time="2025-01-30T15:26:37.863879513Z" level=info msg="StartContainer for \"246f830562e50c2f707de99dde34cfa54812c3d96a21b59449c68b290bcfe30c\" returns successfully"
Jan 30 15:26:37.864207 containerd[1475]: time="2025-01-30T15:26:37.863881753Z" level=info msg="StartContainer for \"3493496cd1a5d828068dfa9bfb341f1f34a9146d3926831f9ec0414df00351d6\" returns successfully"
Jan 30 15:26:38.249334 kubelet[2690]: I0130 15:26:38.249213 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t4ghq" podStartSLOduration=20.249190039 podStartE2EDuration="20.249190039s" podCreationTimestamp="2025-01-30 15:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:38.248450241 +0000 UTC m=+27.308770972" watchObservedRunningTime="2025-01-30 15:26:38.249190039 +0000 UTC m=+27.309510770"
Jan 30 15:26:38.250246 kubelet[2690]: I0130 15:26:38.250187 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ctqk7" podStartSLOduration=20.250174316 podStartE2EDuration="20.250174316s" podCreationTimestamp="2025-01-30 15:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:26:38.234039118 +0000 UTC m=+27.294359889" watchObservedRunningTime="2025-01-30 15:26:38.250174316 +0000 UTC m=+27.310495007"
Jan 30 15:28:06.828102 systemd[1]: Started sshd@7-49.13.124.2:22-139.19.117.131:54460.service - OpenSSH per-connection server daemon (139.19.117.131:54460).
Jan 30 15:28:16.811146 sshd[4080]: Connection closed by authenticating user root 139.19.117.131 port 54460 [preauth]
Jan 30 15:28:16.815200 systemd[1]: sshd@7-49.13.124.2:22-139.19.117.131:54460.service: Deactivated successfully.
Jan 30 15:28:51.430908 systemd[1]: Started sshd@8-49.13.124.2:22-165.22.206.140:42362.service - OpenSSH per-connection server daemon (165.22.206.140:42362).
Jan 30 15:28:51.540927 sshd[4096]: Invalid user sol from 165.22.206.140 port 42362
Jan 30 15:28:51.558798 sshd[4096]: Connection closed by invalid user sol 165.22.206.140 port 42362 [preauth]
Jan 30 15:28:51.560519 systemd[1]: sshd@8-49.13.124.2:22-165.22.206.140:42362.service: Deactivated successfully.
Jan 30 15:31:01.675890 systemd[1]: Started sshd@9-49.13.124.2:22-139.178.68.195:60276.service - OpenSSH per-connection server daemon (139.178.68.195:60276).
Jan 30 15:31:02.660750 sshd[4114]: Accepted publickey for core from 139.178.68.195 port 60276 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:02.662311 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:02.669110 systemd-logind[1453]: New session 8 of user core.
Jan 30 15:31:02.677987 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 15:31:03.428696 sshd[4114]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:03.433176 systemd[1]: sshd@9-49.13.124.2:22-139.178.68.195:60276.service: Deactivated successfully.
Jan 30 15:31:03.435809 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 15:31:03.438784 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Jan 30 15:31:03.439901 systemd-logind[1453]: Removed session 8.
Jan 30 15:31:08.605947 systemd[1]: Started sshd@10-49.13.124.2:22-139.178.68.195:41510.service - OpenSSH per-connection server daemon (139.178.68.195:41510).
Jan 30 15:31:09.593305 sshd[4128]: Accepted publickey for core from 139.178.68.195 port 41510 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:09.595846 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:09.603633 systemd-logind[1453]: New session 9 of user core.
Jan 30 15:31:09.610044 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 15:31:10.357746 sshd[4128]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:10.365501 systemd[1]: sshd@10-49.13.124.2:22-139.178.68.195:41510.service: Deactivated successfully.
Jan 30 15:31:10.370494 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 15:31:10.373683 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Jan 30 15:31:10.376618 systemd-logind[1453]: Removed session 9.
Jan 30 15:31:15.529081 systemd[1]: Started sshd@11-49.13.124.2:22-139.178.68.195:37228.service - OpenSSH per-connection server daemon (139.178.68.195:37228).
Jan 30 15:31:16.501931 sshd[4144]: Accepted publickey for core from 139.178.68.195 port 37228 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:16.505371 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:16.511442 systemd-logind[1453]: New session 10 of user core.
Jan 30 15:31:16.516868 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 15:31:17.264957 sshd[4144]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:17.270273 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Jan 30 15:31:17.270649 systemd[1]: sshd@11-49.13.124.2:22-139.178.68.195:37228.service: Deactivated successfully.
Jan 30 15:31:17.275515 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 15:31:17.276662 systemd-logind[1453]: Removed session 10.
Jan 30 15:31:17.443136 systemd[1]: Started sshd@12-49.13.124.2:22-139.178.68.195:37234.service - OpenSSH per-connection server daemon (139.178.68.195:37234).
Jan 30 15:31:18.427622 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 37234 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:18.429750 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:18.436925 systemd-logind[1453]: New session 11 of user core.
Jan 30 15:31:18.442417 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 15:31:19.236091 sshd[4158]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:19.241377 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Jan 30 15:31:19.242254 systemd[1]: sshd@12-49.13.124.2:22-139.178.68.195:37234.service: Deactivated successfully.
Jan 30 15:31:19.244921 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 15:31:19.246557 systemd-logind[1453]: Removed session 11.
Jan 30 15:31:19.408103 systemd[1]: Started sshd@13-49.13.124.2:22-139.178.68.195:37248.service - OpenSSH per-connection server daemon (139.178.68.195:37248).
Jan 30 15:31:20.390356 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 37248 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:20.392266 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:20.399138 systemd-logind[1453]: New session 12 of user core.
Jan 30 15:31:20.403815 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 15:31:21.136920 sshd[4171]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:21.142280 systemd[1]: sshd@13-49.13.124.2:22-139.178.68.195:37248.service: Deactivated successfully.
Jan 30 15:31:21.144711 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 15:31:21.145944 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Jan 30 15:31:21.147345 systemd-logind[1453]: Removed session 12.
Jan 30 15:31:26.314022 systemd[1]: Started sshd@14-49.13.124.2:22-139.178.68.195:57398.service - OpenSSH per-connection server daemon (139.178.68.195:57398).
Jan 30 15:31:27.294380 sshd[4184]: Accepted publickey for core from 139.178.68.195 port 57398 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:27.297140 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:27.303142 systemd-logind[1453]: New session 13 of user core.
Jan 30 15:31:27.313371 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 15:31:28.042358 sshd[4184]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:28.047516 systemd[1]: sshd@14-49.13.124.2:22-139.178.68.195:57398.service: Deactivated successfully.
Jan 30 15:31:28.049296 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 15:31:28.051933 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Jan 30 15:31:28.053027 systemd-logind[1453]: Removed session 13.
Jan 30 15:31:28.213009 systemd[1]: Started sshd@15-49.13.124.2:22-139.178.68.195:57412.service - OpenSSH per-connection server daemon (139.178.68.195:57412).
Jan 30 15:31:29.190649 sshd[4197]: Accepted publickey for core from 139.178.68.195 port 57412 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:29.192879 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:29.198208 systemd-logind[1453]: New session 14 of user core.
Jan 30 15:31:29.208009 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 15:31:30.003064 sshd[4197]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:30.009552 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Jan 30 15:31:30.009580 systemd[1]: sshd@15-49.13.124.2:22-139.178.68.195:57412.service: Deactivated successfully.
Jan 30 15:31:30.013483 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 15:31:30.014868 systemd-logind[1453]: Removed session 14.
Jan 30 15:31:30.173984 systemd[1]: Started sshd@16-49.13.124.2:22-139.178.68.195:57414.service - OpenSSH per-connection server daemon (139.178.68.195:57414).
Jan 30 15:31:31.159348 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 57414 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:31.161564 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:31.167280 systemd-logind[1453]: New session 15 of user core.
Jan 30 15:31:31.173029 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 15:31:35.281985 sshd[4208]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:35.286278 systemd[1]: sshd@16-49.13.124.2:22-139.178.68.195:57414.service: Deactivated successfully.
Jan 30 15:31:35.289397 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 15:31:35.291481 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Jan 30 15:31:35.292627 systemd-logind[1453]: Removed session 15.
Jan 30 15:31:35.458071 systemd[1]: Started sshd@17-49.13.124.2:22-139.178.68.195:45544.service - OpenSSH per-connection server daemon (139.178.68.195:45544).
Jan 30 15:31:36.429731 sshd[4227]: Accepted publickey for core from 139.178.68.195 port 45544 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:36.431908 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:36.436745 systemd-logind[1453]: New session 16 of user core.
Jan 30 15:31:36.442767 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 15:31:36.454502 update_engine[1455]: I20250130 15:31:36.454371 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 30 15:31:36.454502 update_engine[1455]: I20250130 15:31:36.454473 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 30 15:31:36.455412 update_engine[1455]: I20250130 15:31:36.454962 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 30 15:31:36.456078 update_engine[1455]: I20250130 15:31:36.455811 1455 omaha_request_params.cc:62] Current group set to lts
Jan 30 15:31:36.456078 update_engine[1455]: I20250130 15:31:36.455977 1455 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 30 15:31:36.456078 update_engine[1455]: I20250130 15:31:36.455996 1455 update_attempter.cc:643] Scheduling an action processor start.
Jan 30 15:31:36.456078 update_engine[1455]: I20250130 15:31:36.456022 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 15:31:36.456078 update_engine[1455]: I20250130 15:31:36.456068 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 30 15:31:36.456365 update_engine[1455]: I20250130 15:31:36.456154 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 15:31:36.456365 update_engine[1455]: I20250130 15:31:36.456169 1455 omaha_request_action.cc:272] Request:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]:
Jan 30 15:31:36.456365 update_engine[1455]: I20250130 15:31:36.456180 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:31:36.457792 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 30 15:31:36.458699 update_engine[1455]: I20250130 15:31:36.458654 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:31:36.459339 update_engine[1455]: I20250130 15:31:36.459273 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:31:36.460068 update_engine[1455]: E20250130 15:31:36.460024 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:31:36.460141 update_engine[1455]: I20250130 15:31:36.460098 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 30 15:31:37.306966 sshd[4227]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:37.311367 systemd[1]: sshd@17-49.13.124.2:22-139.178.68.195:45544.service: Deactivated successfully.
Jan 30 15:31:37.313273 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 15:31:37.315345 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Jan 30 15:31:37.316509 systemd-logind[1453]: Removed session 16.
Jan 30 15:31:37.484431 systemd[1]: Started sshd@18-49.13.124.2:22-139.178.68.195:45560.service - OpenSSH per-connection server daemon (139.178.68.195:45560).
Jan 30 15:31:38.459026 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 45560 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:38.461221 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:38.470315 systemd-logind[1453]: New session 17 of user core.
Jan 30 15:31:38.474893 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 15:31:39.214172 sshd[4237]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:39.220200 systemd[1]: sshd@18-49.13.124.2:22-139.178.68.195:45560.service: Deactivated successfully.
Jan 30 15:31:39.223518 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 15:31:39.224486 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Jan 30 15:31:39.225824 systemd-logind[1453]: Removed session 17.
Jan 30 15:31:44.386065 systemd[1]: Started sshd@19-49.13.124.2:22-139.178.68.195:45566.service - OpenSSH per-connection server daemon (139.178.68.195:45566).
Jan 30 15:31:45.398465 sshd[4252]: Accepted publickey for core from 139.178.68.195 port 45566 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:45.400879 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:45.406713 systemd-logind[1453]: New session 18 of user core.
Jan 30 15:31:45.414997 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 15:31:46.158098 sshd[4252]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:46.163962 systemd[1]: sshd@19-49.13.124.2:22-139.178.68.195:45566.service: Deactivated successfully.
Jan 30 15:31:46.164192 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Jan 30 15:31:46.169314 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 15:31:46.171749 systemd-logind[1453]: Removed session 18.
Jan 30 15:31:46.454291 update_engine[1455]: I20250130 15:31:46.454018 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:31:46.455219 update_engine[1455]: I20250130 15:31:46.455137 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:31:46.455583 update_engine[1455]: I20250130 15:31:46.455497 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:31:46.456643 update_engine[1455]: E20250130 15:31:46.456545 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:31:46.456744 update_engine[1455]: I20250130 15:31:46.456692 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 30 15:31:51.333969 systemd[1]: Started sshd@20-49.13.124.2:22-139.178.68.195:45120.service - OpenSSH per-connection server daemon (139.178.68.195:45120).
Jan 30 15:31:52.332321 sshd[4267]: Accepted publickey for core from 139.178.68.195 port 45120 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:52.334530 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:52.339790 systemd-logind[1453]: New session 19 of user core.
Jan 30 15:31:52.356118 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 15:31:53.086919 sshd[4267]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:53.098023 systemd[1]: sshd@20-49.13.124.2:22-139.178.68.195:45120.service: Deactivated successfully.
Jan 30 15:31:53.104451 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 15:31:53.105671 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Jan 30 15:31:53.107057 systemd-logind[1453]: Removed session 19.
Jan 30 15:31:53.262797 systemd[1]: Started sshd@21-49.13.124.2:22-139.178.68.195:45130.service - OpenSSH per-connection server daemon (139.178.68.195:45130).
Jan 30 15:31:54.233326 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 45130 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:54.236057 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:54.244019 systemd-logind[1453]: New session 20 of user core.
Jan 30 15:31:54.251320 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 15:31:56.204219 systemd[1]: run-containerd-runc-k8s.io-e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975-runc.o6LxWB.mount: Deactivated successfully.
Jan 30 15:31:56.219063 containerd[1475]: time="2025-01-30T15:31:56.219007704Z" level=info msg="StopContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" with timeout 30 (s)"
Jan 30 15:31:56.221086 containerd[1475]: time="2025-01-30T15:31:56.219714185Z" level=info msg="Stop container \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" with signal terminated"
Jan 30 15:31:56.231279 containerd[1475]: time="2025-01-30T15:31:56.231220287Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 15:31:56.240366 systemd[1]: cri-containerd-f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350.scope: Deactivated successfully.
Jan 30 15:31:56.242803 containerd[1475]: time="2025-01-30T15:31:56.242381309Z" level=info msg="StopContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" with timeout 2 (s)"
Jan 30 15:31:56.244231 containerd[1475]: time="2025-01-30T15:31:56.244194912Z" level=info msg="Stop container \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" with signal terminated"
Jan 30 15:31:56.255245 systemd-networkd[1374]: lxc_health: Link DOWN
Jan 30 15:31:56.255253 systemd-networkd[1374]: lxc_health: Lost carrier
Jan 30 15:31:56.272537 kubelet[2690]: E0130 15:31:56.271024 2690 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 15:31:56.278753 systemd[1]: cri-containerd-e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975.scope: Deactivated successfully.
Jan 30 15:31:56.279307 systemd[1]: cri-containerd-e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975.scope: Consumed 7.978s CPU time.
Jan 30 15:31:56.296558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350-rootfs.mount: Deactivated successfully.
Jan 30 15:31:56.312252 containerd[1475]: time="2025-01-30T15:31:56.310110839Z" level=info msg="shim disconnected" id=f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350 namespace=k8s.io
Jan 30 15:31:56.312252 containerd[1475]: time="2025-01-30T15:31:56.310178279Z" level=warning msg="cleaning up after shim disconnected" id=f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350 namespace=k8s.io
Jan 30 15:31:56.312252 containerd[1475]: time="2025-01-30T15:31:56.310188639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:31:56.312015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975-rootfs.mount: Deactivated successfully.
Jan 30 15:31:56.323294 containerd[1475]: time="2025-01-30T15:31:56.323064143Z" level=info msg="shim disconnected" id=e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975 namespace=k8s.io
Jan 30 15:31:56.323294 containerd[1475]: time="2025-01-30T15:31:56.323136984Z" level=warning msg="cleaning up after shim disconnected" id=e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975 namespace=k8s.io
Jan 30 15:31:56.323294 containerd[1475]: time="2025-01-30T15:31:56.323148824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:31:56.334458 containerd[1475]: time="2025-01-30T15:31:56.334409325Z" level=info msg="StopContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" returns successfully"
Jan 30 15:31:56.335342 containerd[1475]: time="2025-01-30T15:31:56.335169527Z" level=info msg="StopPodSandbox for \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\""
Jan 30 15:31:56.335342 containerd[1475]: time="2025-01-30T15:31:56.335207527Z" level=info msg="Container to stop \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.338524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4-shm.mount: Deactivated successfully.
Jan 30 15:31:56.347708 systemd[1]: cri-containerd-9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4.scope: Deactivated successfully.
Jan 30 15:31:56.350690 containerd[1475]: time="2025-01-30T15:31:56.350454036Z" level=info msg="StopContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" returns successfully"
Jan 30 15:31:56.351292 containerd[1475]: time="2025-01-30T15:31:56.351249837Z" level=info msg="StopPodSandbox for \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\""
Jan 30 15:31:56.351399 containerd[1475]: time="2025-01-30T15:31:56.351293238Z" level=info msg="Container to stop \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.351399 containerd[1475]: time="2025-01-30T15:31:56.351331158Z" level=info msg="Container to stop \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.351399 containerd[1475]: time="2025-01-30T15:31:56.351341438Z" level=info msg="Container to stop \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.351399 containerd[1475]: time="2025-01-30T15:31:56.351350958Z" level=info msg="Container to stop \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.351399 containerd[1475]: time="2025-01-30T15:31:56.351363558Z" level=info msg="Container to stop \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:31:56.363540 systemd[1]: cri-containerd-5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97.scope: Deactivated successfully.
Jan 30 15:31:56.383151 containerd[1475]: time="2025-01-30T15:31:56.383072098Z" level=info msg="shim disconnected" id=9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4 namespace=k8s.io
Jan 30 15:31:56.383151 containerd[1475]: time="2025-01-30T15:31:56.383129179Z" level=warning msg="cleaning up after shim disconnected" id=9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4 namespace=k8s.io
Jan 30 15:31:56.383151 containerd[1475]: time="2025-01-30T15:31:56.383138179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:31:56.394435 containerd[1475]: time="2025-01-30T15:31:56.394342000Z" level=info msg="shim disconnected" id=5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97 namespace=k8s.io
Jan 30 15:31:56.394659 containerd[1475]: time="2025-01-30T15:31:56.394441680Z" level=warning msg="cleaning up after shim disconnected" id=5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97 namespace=k8s.io
Jan 30 15:31:56.394659 containerd[1475]: time="2025-01-30T15:31:56.394454600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:31:56.405185 containerd[1475]: time="2025-01-30T15:31:56.405140901Z" level=info msg="TearDown network for sandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" successfully"
Jan 30 15:31:56.405397 containerd[1475]: time="2025-01-30T15:31:56.405380461Z" level=info msg="StopPodSandbox for \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" returns successfully"
Jan 30 15:31:56.417427 containerd[1475]: time="2025-01-30T15:31:56.417381084Z" level=info msg="TearDown network for sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" successfully"
Jan 30 15:31:56.417427 containerd[1475]: time="2025-01-30T15:31:56.417417364Z" level=info msg="StopPodSandbox for \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" returns successfully"
Jan 30 15:31:56.453983 update_engine[1455]: I20250130 15:31:56.453884 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:31:56.454421 update_engine[1455]: I20250130 15:31:56.454146 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:31:56.454421 update_engine[1455]: I20250130 15:31:56.454355 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:31:56.455225 update_engine[1455]: E20250130 15:31:56.455178 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:31:56.455296 update_engine[1455]: I20250130 15:31:56.455243 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 30 15:31:56.495664 kubelet[2690]: I0130 15:31:56.495480 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-net\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.495664 kubelet[2690]: I0130 15:31:56.495573 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kd5c\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-kube-api-access-9kd5c\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499532 kubelet[2690]: I0130 15:31:56.497289 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.499532 kubelet[2690]: I0130 15:31:56.497867 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.499532 kubelet[2690]: I0130 15:31:56.498805 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hostproc\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499532 kubelet[2690]: I0130 15:31:56.498860 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f6cfd4-42b6-4e44-813b-fb72c58475e7-clustermesh-secrets\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499532 kubelet[2690]: I0130 15:31:56.498909 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq666\" (UniqueName: \"kubernetes.io/projected/4780db41-30f7-484d-af3d-0169d62c36c5-kube-api-access-cq666\") pod \"4780db41-30f7-484d-af3d-0169d62c36c5\" (UID: \"4780db41-30f7-484d-af3d-0169d62c36c5\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.498934 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4780db41-30f7-484d-af3d-0169d62c36c5-cilium-config-path\") pod \"4780db41-30f7-484d-af3d-0169d62c36c5\" (UID: \"4780db41-30f7-484d-af3d-0169d62c36c5\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.498956 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cni-path\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.498976 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-bpf-maps\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.498995 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-config-path\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.499012 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-xtables-lock\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499769 kubelet[2690]: I0130 15:31:56.499032 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-kernel\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499049 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-lib-modules\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499069 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-etc-cni-netd\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499089 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hubble-tls\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499107 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-cgroup\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499126 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-run\") pod \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\" (UID: \"02f6cfd4-42b6-4e44-813b-fb72c58475e7\") "
Jan 30 15:31:56.499946 kubelet[2690]: I0130 15:31:56.499178 2690 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-net\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\""
Jan 30 15:31:56.502427 kubelet[2690]: I0130 15:31:56.499216 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.502427 kubelet[2690]: I0130 15:31:56.499718 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.502427 kubelet[2690]: I0130 15:31:56.501212 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.502427 kubelet[2690]: I0130 15:31:56.501261 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:31:56.502427 kubelet[2690]: I0130 15:31:56.501284 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:31:56.502548 kubelet[2690]: I0130 15:31:56.502125 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:31:56.504616 kubelet[2690]: I0130 15:31:56.503301 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:31:56.504616 kubelet[2690]: I0130 15:31:56.503461 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:31:56.508428 kubelet[2690]: I0130 15:31:56.508374 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:31:56.508556 kubelet[2690]: I0130 15:31:56.508516 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-kube-api-access-9kd5c" (OuterVolumeSpecName: "kube-api-access-9kd5c") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "kube-api-access-9kd5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:31:56.510303 kubelet[2690]: I0130 15:31:56.510264 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f6cfd4-42b6-4e44-813b-fb72c58475e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:31:56.510583 kubelet[2690]: I0130 15:31:56.510544 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4780db41-30f7-484d-af3d-0169d62c36c5-kube-api-access-cq666" (OuterVolumeSpecName: "kube-api-access-cq666") pod "4780db41-30f7-484d-af3d-0169d62c36c5" (UID: "4780db41-30f7-484d-af3d-0169d62c36c5"). InnerVolumeSpecName "kube-api-access-cq666". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:31:56.511582 kubelet[2690]: I0130 15:31:56.511539 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "02f6cfd4-42b6-4e44-813b-fb72c58475e7" (UID: "02f6cfd4-42b6-4e44-813b-fb72c58475e7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:31:56.511843 kubelet[2690]: I0130 15:31:56.511794 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4780db41-30f7-484d-af3d-0169d62c36c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4780db41-30f7-484d-af3d-0169d62c36c5" (UID: "4780db41-30f7-484d-af3d-0169d62c36c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600126 2690 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9kd5c\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-kube-api-access-9kd5c\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600184 2690 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hostproc\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600201 2690 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f6cfd4-42b6-4e44-813b-fb72c58475e7-clustermesh-secrets\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600215 2690 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cq666\" (UniqueName: \"kubernetes.io/projected/4780db41-30f7-484d-af3d-0169d62c36c5-kube-api-access-cq666\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600227 2690 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4780db41-30f7-484d-af3d-0169d62c36c5-cilium-config-path\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 
kubelet[2690]: I0130 15:31:56.600240 2690 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cni-path\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600254 2690 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-bpf-maps\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.600228 kubelet[2690]: I0130 15:31:56.600266 2690 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-config-path\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600278 2690 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-xtables-lock\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600290 2690 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-host-proc-sys-kernel\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600302 2690 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-lib-modules\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600313 2690 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-etc-cni-netd\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: 
I0130 15:31:56.600325 2690 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f6cfd4-42b6-4e44-813b-fb72c58475e7-hubble-tls\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600336 2690 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-cgroup\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:56.601276 kubelet[2690]: I0130 15:31:56.600348 2690 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f6cfd4-42b6-4e44-813b-fb72c58475e7-cilium-run\") on node \"ci-4081-3-0-1-b815e480da\" DevicePath \"\"" Jan 30 15:31:57.020056 kubelet[2690]: I0130 15:31:57.019810 2690 scope.go:117] "RemoveContainer" containerID="e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975" Jan 30 15:31:57.026866 containerd[1475]: time="2025-01-30T15:31:57.025074410Z" level=info msg="RemoveContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\"" Jan 30 15:31:57.026574 systemd[1]: Removed slice kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice - libcontainer container kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice. Jan 30 15:31:57.027153 systemd[1]: kubepods-burstable-pod02f6cfd4_42b6_4e44_813b_fb72c58475e7.slice: Consumed 8.064s CPU time. Jan 30 15:31:57.031134 systemd[1]: Removed slice kubepods-besteffort-pod4780db41_30f7_484d_af3d_0169d62c36c5.slice - libcontainer container kubepods-besteffort-pod4780db41_30f7_484d_af3d_0169d62c36c5.slice. 
Jan 30 15:31:57.036780 containerd[1475]: time="2025-01-30T15:31:57.036621552Z" level=info msg="RemoveContainer for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" returns successfully"
Jan 30 15:31:57.037281 kubelet[2690]: I0130 15:31:57.037254 2690 scope.go:117] "RemoveContainer" containerID="6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b"
Jan 30 15:31:57.039907 containerd[1475]: time="2025-01-30T15:31:57.039856878Z" level=info msg="RemoveContainer for \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\""
Jan 30 15:31:57.044971 containerd[1475]: time="2025-01-30T15:31:57.044921648Z" level=info msg="RemoveContainer for \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\" returns successfully"
Jan 30 15:31:57.045393 kubelet[2690]: I0130 15:31:57.045219 2690 scope.go:117] "RemoveContainer" containerID="e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b"
Jan 30 15:31:57.048694 containerd[1475]: time="2025-01-30T15:31:57.047670773Z" level=info msg="RemoveContainer for \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\""
Jan 30 15:31:57.056618 containerd[1475]: time="2025-01-30T15:31:57.056321310Z" level=info msg="RemoveContainer for \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\" returns successfully"
Jan 30 15:31:57.057995 kubelet[2690]: I0130 15:31:57.057767 2690 scope.go:117] "RemoveContainer" containerID="5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff"
Jan 30 15:31:57.066202 containerd[1475]: time="2025-01-30T15:31:57.066160128Z" level=info msg="RemoveContainer for \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\""
Jan 30 15:31:57.072308 containerd[1475]: time="2025-01-30T15:31:57.072166460Z" level=info msg="RemoveContainer for \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\" returns successfully"
Jan 30 15:31:57.072436 kubelet[2690]: I0130 15:31:57.072395 2690 scope.go:117] "RemoveContainer" containerID="4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb"
Jan 30 15:31:57.073639 containerd[1475]: time="2025-01-30T15:31:57.073611423Z" level=info msg="RemoveContainer for \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\""
Jan 30 15:31:57.076605 containerd[1475]: time="2025-01-30T15:31:57.076473268Z" level=info msg="RemoveContainer for \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\" returns successfully"
Jan 30 15:31:57.076777 kubelet[2690]: I0130 15:31:57.076747 2690 scope.go:117] "RemoveContainer" containerID="e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975"
Jan 30 15:31:57.077065 containerd[1475]: time="2025-01-30T15:31:57.077025709Z" level=error msg="ContainerStatus for \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\": not found"
Jan 30 15:31:57.077441 kubelet[2690]: E0130 15:31:57.077406 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\": not found" containerID="e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975"
Jan 30 15:31:57.077644 kubelet[2690]: I0130 15:31:57.077472 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975"} err="failed to get container status \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\": rpc error: code = NotFound desc = an error occurred when try to find container \"e80c9c23bb255cc6781e9b0473afb8c2ce3efb7132a67dc817f6400c75fdc975\": not found"
Jan 30 15:31:57.077699 kubelet[2690]: I0130 15:31:57.077649 2690 scope.go:117] "RemoveContainer" containerID="6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b"
Jan 30 15:31:57.078173 containerd[1475]: time="2025-01-30T15:31:57.078127391Z" level=error msg="ContainerStatus for \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\": not found"
Jan 30 15:31:57.078343 kubelet[2690]: E0130 15:31:57.078313 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\": not found" containerID="6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b"
Jan 30 15:31:57.078382 kubelet[2690]: I0130 15:31:57.078355 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b"} err="failed to get container status \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6da0d8538b7fff13558d1d1c332afc6e616690760ffb626c8cb35620fec24c1b\": not found"
Jan 30 15:31:57.078406 kubelet[2690]: I0130 15:31:57.078383 2690 scope.go:117] "RemoveContainer" containerID="e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b"
Jan 30 15:31:57.078639 containerd[1475]: time="2025-01-30T15:31:57.078580552Z" level=error msg="ContainerStatus for \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\": not found"
Jan 30 15:31:57.079545 kubelet[2690]: E0130 15:31:57.079478 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\": not found" containerID="e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b"
Jan 30 15:31:57.079680 kubelet[2690]: I0130 15:31:57.079621 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b"} err="failed to get container status \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2618bead5202d330451ce2e98b6606561a70d356b2617a00491878139a7bf4b\": not found"
Jan 30 15:31:57.079680 kubelet[2690]: I0130 15:31:57.079658 2690 scope.go:117] "RemoveContainer" containerID="5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff"
Jan 30 15:31:57.080040 containerd[1475]: time="2025-01-30T15:31:57.080006755Z" level=error msg="ContainerStatus for \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\": not found"
Jan 30 15:31:57.080378 kubelet[2690]: E0130 15:31:57.080348 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\": not found" containerID="5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff"
Jan 30 15:31:57.080519 kubelet[2690]: I0130 15:31:57.080395 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff"} err="failed to get container status \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\": rpc error: code = NotFound desc = an error occurred when try to find container \"5542b1fad7ad8da8b11b34a6c3bf063b35e60405e2211a7712e8229c23970eff\": not found"
Jan 30 15:31:57.080556 kubelet[2690]: I0130 15:31:57.080525 2690 scope.go:117] "RemoveContainer" containerID="4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb"
Jan 30 15:31:57.080842 containerd[1475]: time="2025-01-30T15:31:57.080811996Z" level=error msg="ContainerStatus for \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\": not found"
Jan 30 15:31:57.081791 kubelet[2690]: E0130 15:31:57.081699 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\": not found" containerID="4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb"
Jan 30 15:31:57.081863 kubelet[2690]: I0130 15:31:57.081822 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb"} err="failed to get container status \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ba1644cc759cdda8ecebac7dd795cc0e200f221f0796eb06cbb430fb26ee1fb\": not found"
Jan 30 15:31:57.081927 kubelet[2690]: I0130 15:31:57.081903 2690 scope.go:117] "RemoveContainer" containerID="f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350"
Jan 30 15:31:57.083848 containerd[1475]: time="2025-01-30T15:31:57.083779562Z" level=info msg="RemoveContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\""
Jan 30 15:31:57.087646 containerd[1475]: time="2025-01-30T15:31:57.087548689Z" level=info msg="RemoveContainer for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" returns successfully"
Jan 30 15:31:57.088107 kubelet[2690]: I0130 15:31:57.088079 2690 scope.go:117] "RemoveContainer" containerID="f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350"
Jan 30 15:31:57.088404 containerd[1475]: time="2025-01-30T15:31:57.088336131Z" level=error msg="ContainerStatus for \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\": not found"
Jan 30 15:31:57.088669 kubelet[2690]: E0130 15:31:57.088558 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\": not found" containerID="f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350"
Jan 30 15:31:57.088669 kubelet[2690]: I0130 15:31:57.088607 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350"} err="failed to get container status \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\": rpc error: code = NotFound desc = an error occurred when try to find container \"f60eb0343cd46c6617cba644838e6863576bb6f846946842d535f089f945e350\": not found"
Jan 30 15:31:57.199007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4-rootfs.mount: Deactivated successfully.
Jan 30 15:31:57.199188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97-rootfs.mount: Deactivated successfully.
Jan 30 15:31:57.199310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97-shm.mount: Deactivated successfully.
Jan 30 15:31:57.199448 systemd[1]: var-lib-kubelet-pods-4780db41\x2d30f7\x2d484d\x2daf3d\x2d0169d62c36c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcq666.mount: Deactivated successfully.
Jan 30 15:31:57.199576 systemd[1]: var-lib-kubelet-pods-02f6cfd4\x2d42b6\x2d4e44\x2d813b\x2dfb72c58475e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9kd5c.mount: Deactivated successfully.
Jan 30 15:31:57.199707 systemd[1]: var-lib-kubelet-pods-02f6cfd4\x2d42b6\x2d4e44\x2d813b\x2dfb72c58475e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 15:31:57.199813 systemd[1]: var-lib-kubelet-pods-02f6cfd4\x2d42b6\x2d4e44\x2d813b\x2dfb72c58475e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 15:31:58.283768 sshd[4280]: pam_unix(sshd:session): session closed for user core
Jan 30 15:31:58.289493 systemd[1]: sshd@21-49.13.124.2:22-139.178.68.195:45130.service: Deactivated successfully.
Jan 30 15:31:58.292719 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 15:31:58.293573 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Jan 30 15:31:58.294740 systemd-logind[1453]: Removed session 20.
Jan 30 15:31:58.458039 systemd[1]: Started sshd@22-49.13.124.2:22-139.178.68.195:51002.service - OpenSSH per-connection server daemon (139.178.68.195:51002).
Jan 30 15:31:58.728537 kubelet[2690]: I0130 15:31:58.728293 2690 setters.go:600] "Node became not ready" node="ci-4081-3-0-1-b815e480da" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T15:31:58Z","lastTransitionTime":"2025-01-30T15:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 15:31:59.048265 kubelet[2690]: I0130 15:31:59.047907 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" path="/var/lib/kubelet/pods/02f6cfd4-42b6-4e44-813b-fb72c58475e7/volumes"
Jan 30 15:31:59.048799 kubelet[2690]: I0130 15:31:59.048771 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4780db41-30f7-484d-af3d-0169d62c36c5" path="/var/lib/kubelet/pods/4780db41-30f7-484d-af3d-0169d62c36c5/volumes"
Jan 30 15:31:59.427277 sshd[4446]: Accepted publickey for core from 139.178.68.195 port 51002 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:31:59.429692 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:31:59.434754 systemd-logind[1453]: New session 21 of user core.
Jan 30 15:31:59.441812 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151681 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="mount-cgroup"
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151713 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4780db41-30f7-484d-af3d-0169d62c36c5" containerName="cilium-operator"
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151720 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="cilium-agent"
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151727 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="apply-sysctl-overwrites"
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151734 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="mount-bpf-fs"
Jan 30 15:32:01.152930 kubelet[2690]: E0130 15:32:01.151740 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="clean-cilium-state"
Jan 30 15:32:01.152930 kubelet[2690]: I0130 15:32:01.151762 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="4780db41-30f7-484d-af3d-0169d62c36c5" containerName="cilium-operator"
Jan 30 15:32:01.152930 kubelet[2690]: I0130 15:32:01.151768 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f6cfd4-42b6-4e44-813b-fb72c58475e7" containerName="cilium-agent"
Jan 30 15:32:01.162739 systemd[1]: Created slice kubepods-burstable-pode3679eb0_9165_4b2c_b56b_37cee05360e7.slice - libcontainer container kubepods-burstable-pode3679eb0_9165_4b2c_b56b_37cee05360e7.slice.
Jan 30 15:32:01.232216 kubelet[2690]: I0130 15:32:01.232157 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3679eb0-9165-4b2c-b56b-37cee05360e7-clustermesh-secrets\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232420 kubelet[2690]: I0130 15:32:01.232237 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3679eb0-9165-4b2c-b56b-37cee05360e7-cilium-ipsec-secrets\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232420 kubelet[2690]: I0130 15:32:01.232289 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-xtables-lock\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232420 kubelet[2690]: I0130 15:32:01.232334 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-cilium-cgroup\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232420 kubelet[2690]: I0130 15:32:01.232372 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-lib-modules\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232420 kubelet[2690]: I0130 15:32:01.232412 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-bpf-maps\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232456 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-cni-path\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232500 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-etc-cni-netd\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232541 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3679eb0-9165-4b2c-b56b-37cee05360e7-cilium-config-path\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232580 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-host-proc-sys-kernel\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232652 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3679eb0-9165-4b2c-b56b-37cee05360e7-hubble-tls\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.232809 kubelet[2690]: I0130 15:32:01.232694 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8x6b\" (UniqueName: \"kubernetes.io/projected/e3679eb0-9165-4b2c-b56b-37cee05360e7-kube-api-access-z8x6b\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.233172 kubelet[2690]: I0130 15:32:01.232765 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-cilium-run\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.233172 kubelet[2690]: I0130 15:32:01.232810 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-hostproc\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.233172 kubelet[2690]: I0130 15:32:01.232851 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3679eb0-9165-4b2c-b56b-37cee05360e7-host-proc-sys-net\") pod \"cilium-4qsfv\" (UID: \"e3679eb0-9165-4b2c-b56b-37cee05360e7\") " pod="kube-system/cilium-4qsfv"
Jan 30 15:32:01.272184 kubelet[2690]: E0130 15:32:01.272046 2690 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 15:32:01.304133 sshd[4446]: pam_unix(sshd:session): session closed for user core
Jan 30 15:32:01.310335 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Jan 30 15:32:01.311478 systemd[1]: sshd@22-49.13.124.2:22-139.178.68.195:51002.service: Deactivated successfully.
Jan 30 15:32:01.314161 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 15:32:01.314478 systemd[1]: session-21.scope: Consumed 1.086s CPU time.
Jan 30 15:32:01.315650 systemd-logind[1453]: Removed session 21.
Jan 30 15:32:01.469688 containerd[1475]: time="2025-01-30T15:32:01.469479013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qsfv,Uid:e3679eb0-9165-4b2c-b56b-37cee05360e7,Namespace:kube-system,Attempt:0,}"
Jan 30 15:32:01.480938 systemd[1]: Started sshd@23-49.13.124.2:22-139.178.68.195:51008.service - OpenSSH per-connection server daemon (139.178.68.195:51008).
Jan 30 15:32:01.500170 containerd[1475]: time="2025-01-30T15:32:01.499912550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:32:01.500170 containerd[1475]: time="2025-01-30T15:32:01.499983150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:32:01.500667 containerd[1475]: time="2025-01-30T15:32:01.500509831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:32:01.501316 containerd[1475]: time="2025-01-30T15:32:01.501268153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:32:01.526034 systemd[1]: Started cri-containerd-d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710.scope - libcontainer container d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710.
Jan 30 15:32:01.553899 containerd[1475]: time="2025-01-30T15:32:01.553821732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qsfv,Uid:e3679eb0-9165-4b2c-b56b-37cee05360e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\""
Jan 30 15:32:01.560838 containerd[1475]: time="2025-01-30T15:32:01.560791945Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 15:32:01.576267 containerd[1475]: time="2025-01-30T15:32:01.576134214Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3\""
Jan 30 15:32:01.577254 containerd[1475]: time="2025-01-30T15:32:01.577043375Z" level=info msg="StartContainer for \"b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3\""
Jan 30 15:32:01.606808 systemd[1]: Started cri-containerd-b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3.scope - libcontainer container b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3.
Jan 30 15:32:01.637006 containerd[1475]: time="2025-01-30T15:32:01.636937568Z" level=info msg="StartContainer for \"b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3\" returns successfully"
Jan 30 15:32:01.645398 systemd[1]: cri-containerd-b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3.scope: Deactivated successfully.
Jan 30 15:32:01.680118 containerd[1475]: time="2025-01-30T15:32:01.679776929Z" level=info msg="shim disconnected" id=b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3 namespace=k8s.io
Jan 30 15:32:01.680118 containerd[1475]: time="2025-01-30T15:32:01.679889849Z" level=warning msg="cleaning up after shim disconnected" id=b4dfa937f252c7e85d33e9bd616d423f4c3e097750ff8e1866be0af6e1f705d3 namespace=k8s.io
Jan 30 15:32:01.680118 containerd[1475]: time="2025-01-30T15:32:01.679907689Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:32:02.045029 containerd[1475]: time="2025-01-30T15:32:02.044875257Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 15:32:02.058188 containerd[1475]: time="2025-01-30T15:32:02.058142042Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29\""
Jan 30 15:32:02.059966 containerd[1475]: time="2025-01-30T15:32:02.059063284Z" level=info msg="StartContainer for \"3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29\""
Jan 30 15:32:02.093845 systemd[1]: Started cri-containerd-3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29.scope - libcontainer container 3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29.
Jan 30 15:32:02.126673 containerd[1475]: time="2025-01-30T15:32:02.126014570Z" level=info msg="StartContainer for \"3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29\" returns successfully"
Jan 30 15:32:02.134109 systemd[1]: cri-containerd-3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29.scope: Deactivated successfully.
Jan 30 15:32:02.164648 containerd[1475]: time="2025-01-30T15:32:02.164358682Z" level=info msg="shim disconnected" id=3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29 namespace=k8s.io
Jan 30 15:32:02.164648 containerd[1475]: time="2025-01-30T15:32:02.164439362Z" level=warning msg="cleaning up after shim disconnected" id=3763416f7eb845f4a813cf5aec5952f84b7a18c0f3067f495a743e6051a65b29 namespace=k8s.io
Jan 30 15:32:02.164648 containerd[1475]: time="2025-01-30T15:32:02.164458282Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:32:02.446205 sshd[4461]: Accepted publickey for core from 139.178.68.195 port 51008 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:32:02.450084 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:32:02.458929 systemd-logind[1453]: New session 22 of user core.
Jan 30 15:32:02.462819 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 15:32:03.052649 containerd[1475]: time="2025-01-30T15:32:03.052424870Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 15:32:03.076613 containerd[1475]: time="2025-01-30T15:32:03.075656353Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1\""
Jan 30 15:32:03.078468 containerd[1475]: time="2025-01-30T15:32:03.078429399Z" level=info msg="StartContainer for \"9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1\""
Jan 30 15:32:03.117827 systemd[1]: Started cri-containerd-9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1.scope - libcontainer container 9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1.
Jan 30 15:32:03.118380 sshd[4461]: pam_unix(sshd:session): session closed for user core
Jan 30 15:32:03.127089 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Jan 30 15:32:03.128155 systemd[1]: sshd@23-49.13.124.2:22-139.178.68.195:51008.service: Deactivated successfully.
Jan 30 15:32:03.133753 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 15:32:03.135398 systemd-logind[1453]: Removed session 22.
Jan 30 15:32:03.156162 systemd[1]: cri-containerd-9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1.scope: Deactivated successfully.
Jan 30 15:32:03.161419 containerd[1475]: time="2025-01-30T15:32:03.161323194Z" level=info msg="StartContainer for \"9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1\" returns successfully"
Jan 30 15:32:03.185479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1-rootfs.mount: Deactivated successfully.
Jan 30 15:32:03.194513 containerd[1475]: time="2025-01-30T15:32:03.194275055Z" level=info msg="shim disconnected" id=9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1 namespace=k8s.io
Jan 30 15:32:03.194513 containerd[1475]: time="2025-01-30T15:32:03.194381616Z" level=warning msg="cleaning up after shim disconnected" id=9942aa3cb694c5a209877e087e9737d158c50f6430495155cb7dba4fc261e2f1 namespace=k8s.io
Jan 30 15:32:03.194513 containerd[1475]: time="2025-01-30T15:32:03.194395256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:32:03.295206 systemd[1]: Started sshd@24-49.13.124.2:22-139.178.68.195:51020.service - OpenSSH per-connection server daemon (139.178.68.195:51020).
Jan 30 15:32:04.058107 containerd[1475]: time="2025-01-30T15:32:04.057632712Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 15:32:04.083700 containerd[1475]: time="2025-01-30T15:32:04.083052999Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399\""
Jan 30 15:32:04.083911 containerd[1475]: time="2025-01-30T15:32:04.083833481Z" level=info msg="StartContainer for \"c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399\""
Jan 30 15:32:04.125843 systemd[1]: Started cri-containerd-c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399.scope - libcontainer container c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399.
Jan 30 15:32:04.157530 systemd[1]: cri-containerd-c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399.scope: Deactivated successfully.
Jan 30 15:32:04.160749 containerd[1475]: time="2025-01-30T15:32:04.160537744Z" level=info msg="StartContainer for \"c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399\" returns successfully"
Jan 30 15:32:04.180760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399-rootfs.mount: Deactivated successfully.
Jan 30 15:32:04.186617 containerd[1475]: time="2025-01-30T15:32:04.186374712Z" level=info msg="shim disconnected" id=c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399 namespace=k8s.io
Jan 30 15:32:04.186617 containerd[1475]: time="2025-01-30T15:32:04.186431712Z" level=warning msg="cleaning up after shim disconnected" id=c9e473360383671648d1383cc0a5bc9fa7ed01ac3bf459e50434a00b48e17399 namespace=k8s.io
Jan 30 15:32:04.186617 containerd[1475]: time="2025-01-30T15:32:04.186440152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:32:04.296458 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 51020 ssh2: RSA SHA256:sEmXhGFGlwd7KeRcv2oD/pODTHGZASfNUvhka9D+Bx0
Jan 30 15:32:04.298530 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:32:04.304002 systemd-logind[1453]: New session 23 of user core.
Jan 30 15:32:04.312914 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 15:32:05.067445 containerd[1475]: time="2025-01-30T15:32:05.066579435Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 15:32:05.095282 containerd[1475]: time="2025-01-30T15:32:05.095230888Z" level=info msg="CreateContainer within sandbox \"d265621048b38c49c833a859e6ecf1f22ea4606273cf49fcabe4397b978f6710\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d46f6d7df96505573bef335945d28d1b316ef17390821ee627d55aa074cbf876\""
Jan 30 15:32:05.095802 containerd[1475]: time="2025-01-30T15:32:05.095782329Z" level=info msg="StartContainer for \"d46f6d7df96505573bef335945d28d1b316ef17390821ee627d55aa074cbf876\""
Jan 30 15:32:05.132822 systemd[1]: Started cri-containerd-d46f6d7df96505573bef335945d28d1b316ef17390821ee627d55aa074cbf876.scope - libcontainer container d46f6d7df96505573bef335945d28d1b316ef17390821ee627d55aa074cbf876.
Jan 30 15:32:05.166623 containerd[1475]: time="2025-01-30T15:32:05.166563461Z" level=info msg="StartContainer for \"d46f6d7df96505573bef335945d28d1b316ef17390821ee627d55aa074cbf876\" returns successfully"
Jan 30 15:32:05.481695 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 15:32:06.087316 kubelet[2690]: I0130 15:32:06.086293 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4qsfv" podStartSLOduration=5.086274291 podStartE2EDuration="5.086274291s" podCreationTimestamp="2025-01-30 15:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:32:06.08541865 +0000 UTC m=+355.145739421" watchObservedRunningTime="2025-01-30 15:32:06.086274291 +0000 UTC m=+355.146595022"
Jan 30 15:32:06.455279 update_engine[1455]: I20250130 15:32:06.455026 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:32:06.455740 update_engine[1455]: I20250130 15:32:06.455367 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:32:06.455740 update_engine[1455]: I20250130 15:32:06.455657 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:32:06.456738 update_engine[1455]: E20250130 15:32:06.456671 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:32:06.456861 update_engine[1455]: I20250130 15:32:06.456743 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 15:32:06.456861 update_engine[1455]: I20250130 15:32:06.456753 1455 omaha_request_action.cc:617] Omaha request response:
Jan 30 15:32:06.456861 update_engine[1455]: E20250130 15:32:06.456831 1455 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456881 1455 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456892 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456897 1455 update_attempter.cc:306] Processing Done.
Jan 30 15:32:06.456946 update_engine[1455]: E20250130 15:32:06.456911 1455 update_attempter.cc:619] Update failed.
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456918 1455 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456923 1455 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 30 15:32:06.456946 update_engine[1455]: I20250130 15:32:06.456928 1455 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 30 15:32:06.457117 update_engine[1455]: I20250130 15:32:06.456996 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 15:32:06.457117 update_engine[1455]: I20250130 15:32:06.457020 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 15:32:06.457117 update_engine[1455]: I20250130 15:32:06.457026 1455 omaha_request_action.cc:272] Request:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]:
Jan 30 15:32:06.457117 update_engine[1455]: I20250130 15:32:06.457033 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:32:06.457332 update_engine[1455]: I20250130 15:32:06.457172 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:32:06.457378 update_engine[1455]: I20250130 15:32:06.457345 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:32:06.457779 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 30 15:32:06.458156 update_engine[1455]: E20250130 15:32:06.458001 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458051 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458060 1455 omaha_request_action.cc:617] Omaha request response:
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458066 1455 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458071 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458077 1455 update_attempter.cc:306] Processing Done.
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458083 1455 update_attempter.cc:310] Error event sent.
Jan 30 15:32:06.458156 update_engine[1455]: I20250130 15:32:06.458093 1455 update_check_scheduler.cc:74] Next update check in 46m38s
Jan 30 15:32:06.458504 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 30 15:32:07.114045 kubelet[2690]: E0130 15:32:07.113985 2690 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56702->127.0.0.1:32969: write tcp 127.0.0.1:56702->127.0.0.1:32969: write: broken pipe
Jan 30 15:32:08.580144 systemd-networkd[1374]: lxc_health: Link UP
Jan 30 15:32:08.586432 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 30 15:32:09.298512 kubelet[2690]: E0130 15:32:09.298387 2690 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48696->127.0.0.1:32969: write tcp 127.0.0.1:48696->127.0.0.1:32969: write: broken pipe
Jan 30 15:32:09.749700 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 30 15:32:11.096146 containerd[1475]: time="2025-01-30T15:32:11.095746835Z" level=info msg="StopPodSandbox for \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\""
Jan 30 15:32:11.096146 containerd[1475]: time="2025-01-30T15:32:11.095892916Z" level=info msg="TearDown network for sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" successfully"
Jan 30 15:32:11.096146 containerd[1475]: time="2025-01-30T15:32:11.095907756Z" level=info msg="StopPodSandbox for \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" returns successfully"
Jan 30 15:32:11.097108 containerd[1475]: time="2025-01-30T15:32:11.096991516Z" level=info msg="RemovePodSandbox for \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\""
Jan 30 15:32:11.097108 containerd[1475]: time="2025-01-30T15:32:11.097057036Z" level=info msg="Forcibly stopping sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\""
Jan 30 15:32:11.097313 containerd[1475]: time="2025-01-30T15:32:11.097240636Z" level=info msg="TearDown network for sandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" successfully"
Jan 30 15:32:11.104716 containerd[1475]: time="2025-01-30T15:32:11.104500078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 15:32:11.104716 containerd[1475]: time="2025-01-30T15:32:11.104576878Z" level=info msg="RemovePodSandbox \"5fa2da5e1e79ad8f9292a5ff808936a8b7359815518cbf84374a63cc11ea0e97\" returns successfully"
Jan 30 15:32:11.105579 containerd[1475]: time="2025-01-30T15:32:11.105356878Z" level=info msg="StopPodSandbox for \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\""
Jan 30 15:32:11.105579 containerd[1475]: time="2025-01-30T15:32:11.105436918Z" level=info msg="TearDown network for sandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" successfully"
Jan 30 15:32:11.105579 containerd[1475]: time="2025-01-30T15:32:11.105447398Z" level=info msg="StopPodSandbox for \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" returns successfully"
Jan 30 15:32:11.106788 containerd[1475]: time="2025-01-30T15:32:11.106154358Z" level=info msg="RemovePodSandbox for \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\""
Jan 30 15:32:11.106788 containerd[1475]: time="2025-01-30T15:32:11.106179838Z" level=info msg="Forcibly stopping sandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\""
Jan 30 15:32:11.106788 containerd[1475]: time="2025-01-30T15:32:11.106227518Z" level=info msg="TearDown network for sandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" successfully"
Jan 30 15:32:11.109805 containerd[1475]: time="2025-01-30T15:32:11.109750799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 15:32:11.109960 containerd[1475]: time="2025-01-30T15:32:11.109941999Z" level=info msg="RemovePodSandbox \"9e27a54169c7c7701574b17516f4aa2cf1090320f3e23b88585145b62aa526b4\" returns successfully"
Jan 30 15:32:13.783971 sshd[4695]: pam_unix(sshd:session): session closed for user core
Jan 30 15:32:13.789147 systemd[1]: sshd@24-49.13.124.2:22-139.178.68.195:51020.service: Deactivated successfully.
Jan 30 15:32:13.791905 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 15:32:13.793166 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Jan 30 15:32:13.794521 systemd-logind[1453]: Removed session 23.