Jan 29 11:03:04.909629 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:03:04.909655 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:03:04.909665 kernel: KASLR enabled
Jan 29 11:03:04.909671 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 11:03:04.909676 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Jan 29 11:03:04.909682 kernel: random: crng init done
Jan 29 11:03:04.909689 kernel: secureboot: Secure boot disabled
Jan 29 11:03:04.909695 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:03:04.909700 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 11:03:04.909708 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:03:04.909714 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.909719 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.909725 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.909731 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.909738 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.910189 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.910197 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.910203 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.910209 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:04.910215 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:03:04.910222 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 11:03:04.910228 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:03:04.910234 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:03:04.910240 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 29 11:03:04.910246 kernel: Zone ranges:
Jan 29 11:03:04.910254 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 11:03:04.910260 kernel: DMA32 empty
Jan 29 11:03:04.910266 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 11:03:04.910273 kernel: Movable zone start for each node
Jan 29 11:03:04.910279 kernel: Early memory node ranges
Jan 29 11:03:04.910285 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 29 11:03:04.910291 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 11:03:04.910298 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 11:03:04.910304 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 11:03:04.910310 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 11:03:04.910316 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 11:03:04.910340 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 11:03:04.910349 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:03:04.910355 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 11:03:04.910361 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:03:04.910370 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:03:04.910377 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:03:04.910383 kernel: psci: Trusted OS migration not required
Jan 29 11:03:04.910391 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:03:04.910398 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:03:04.910404 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:03:04.910411 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:03:04.910417 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:03:04.910424 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:03:04.910430 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:03:04.910440 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:03:04.910446 kernel: CPU features: detected: Spectre-v4
Jan 29 11:03:04.910453 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:03:04.910460 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:03:04.910467 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:03:04.910474 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:03:04.910481 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:03:04.910487 kernel: alternatives: applying boot alternatives
Jan 29 11:03:04.910495 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:03:04.910502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:03:04.910509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:03:04.910516 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:03:04.910525 kernel: Fallback order for Node 0: 0
Jan 29 11:03:04.910533 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 11:03:04.910543 kernel: Policy zone: Normal
Jan 29 11:03:04.910549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:03:04.910556 kernel: software IO TLB: area num 2.
Jan 29 11:03:04.910562 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 11:03:04.910569 kernel: Memory: 3882680K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 213320K reserved, 0K cma-reserved)
Jan 29 11:03:04.910576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:03:04.910582 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:03:04.910589 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:03:04.910596 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:03:04.910603 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:03:04.910609 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:03:04.910616 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:03:04.910624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:03:04.910631 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:03:04.910637 kernel: GICv3: 256 SPIs implemented
Jan 29 11:03:04.910644 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:03:04.910650 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:03:04.910657 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:03:04.910663 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:03:04.910670 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:03:04.910677 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:03:04.910683 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:03:04.910690 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 11:03:04.910698 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 11:03:04.910705 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:03:04.910711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:03:04.910718 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:03:04.910725 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:03:04.910731 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:03:04.910738 kernel: Console: colour dummy device 80x25
Jan 29 11:03:04.910772 kernel: ACPI: Core revision 20230628
Jan 29 11:03:04.910779 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:03:04.910786 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:03:04.910796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:03:04.910802 kernel: landlock: Up and running.
Jan 29 11:03:04.910809 kernel: SELinux: Initializing.
Jan 29 11:03:04.910816 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:03:04.910823 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:03:04.910830 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:03:04.910836 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:03:04.910843 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:03:04.910850 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:03:04.910859 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:03:04.910866 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:03:04.910872 kernel: Remapping and enabling EFI services.
Jan 29 11:03:04.910879 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:03:04.910886 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:03:04.910893 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:03:04.910900 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 11:03:04.910907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:03:04.910913 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:03:04.910920 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:03:04.910928 kernel: SMP: Total of 2 processors activated.
Jan 29 11:03:04.910935 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:03:04.910947 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:03:04.910956 kernel: CPU features: detected: Common not Private translations
Jan 29 11:03:04.910963 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:03:04.910970 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:03:04.910977 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:03:04.910985 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:03:04.910992 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:03:04.911001 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:03:04.911008 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:03:04.911015 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:03:04.911022 kernel: alternatives: applying system-wide alternatives
Jan 29 11:03:04.911030 kernel: devtmpfs: initialized
Jan 29 11:03:04.911037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:03:04.911044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:03:04.911051 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:03:04.911060 kernel: SMBIOS 3.0.0 present.
Jan 29 11:03:04.911067 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 11:03:04.911074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:03:04.911081 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:03:04.911088 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:03:04.911096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:03:04.911103 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:03:04.911110 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Jan 29 11:03:04.911117 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:03:04.911125 kernel: cpuidle: using governor menu
Jan 29 11:03:04.911133 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:03:04.911140 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:03:04.911147 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:03:04.911154 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:03:04.911161 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:03:04.911168 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:03:04.911175 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:03:04.911183 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:03:04.911191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:03:04.911198 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:03:04.911206 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:03:04.911213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:03:04.911220 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:03:04.911227 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:03:04.911234 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:03:04.911241 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:03:04.911248 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:03:04.911257 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:03:04.911264 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:03:04.911271 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:03:04.911278 kernel: ACPI: Interpreter enabled
Jan 29 11:03:04.911285 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:03:04.911292 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:03:04.911300 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:03:04.911307 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:03:04.911314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:03:04.911523 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:03:04.911599 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:03:04.911663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:03:04.911785 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:03:04.911853 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:03:04.911862 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:03:04.911869 kernel: PCI host bridge to bus 0000:00
Jan 29 11:03:04.911944 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:03:04.912003 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:03:04.912070 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:03:04.912127 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:03:04.912211 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:03:04.912299 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 11:03:04.912387 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 11:03:04.912470 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:03:04.912542 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.912607 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 11:03:04.912754 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.912881 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 11:03:04.912961 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.913037 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 11:03:04.913109 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.913175 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 11:03:04.913247 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.913312 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 11:03:04.913454 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.913524 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 11:03:04.913596 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.913661 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 11:03:04.913739 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.914478 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 11:03:04.914574 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:04.914657 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 11:03:04.914738 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 11:03:04.914826 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 11:03:04.914913 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:03:04.914984 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 11:03:04.915060 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:03:04.915133 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:03:04.915207 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 11:03:04.915275 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 11:03:04.915370 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 11:03:04.915442 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 11:03:04.915508 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 11:03:04.915585 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 11:03:04.915655 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 11:03:04.915729 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 11:03:04.918212 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 11:03:04.918311 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 11:03:04.918422 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 11:03:04.918498 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 11:03:04.918575 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:03:04.918659 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:03:04.918727 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 11:03:04.919277 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 11:03:04.919426 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:03:04.919503 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 11:03:04.919577 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:03:04.919642 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:03:04.919708 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 11:03:04.920894 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 11:03:04.920983 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 11:03:04.921088 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 11:03:04.921158 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:03:04.921222 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:03:04.921300 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 11:03:04.921422 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 11:03:04.921509 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 11:03:04.921580 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 11:03:04.921646 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:03:04.921710 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:03:04.922916 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 11:03:04.923004 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:03:04.923070 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:03:04.923141 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 11:03:04.923225 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:03:04.923372 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:03:04.923446 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 11:03:04.923515 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:03:04.923580 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:03:04.923659 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 11:03:04.923736 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:03:04.925617 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:03:04.925693 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 11:03:04.925797 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:04.925871 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 11:03:04.925937 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:04.926017 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 11:03:04.926135 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:04.926216 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 11:03:04.926284 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:04.926378 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 11:03:04.926452 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:04.926527 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:04.926594 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:04.926664 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:04.926730 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:04.928889 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:04.928971 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:04.929041 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 11:03:04.929118 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:04.929188 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 11:03:04.929253 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 11:03:04.929341 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 11:03:04.929417 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 11:03:04.929489 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 11:03:04.929570 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 11:03:04.929653 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 11:03:04.929734 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 11:03:04.929821 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 11:03:04.929887 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 11:03:04.929957 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 11:03:04.930276 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 11:03:04.930415 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 11:03:04.930489 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 11:03:04.930559 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 11:03:04.930632 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 11:03:04.930702 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 11:03:04.931177 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 11:03:04.931261 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 11:03:04.931352 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 11:03:04.931430 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 11:03:04.931507 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 11:03:04.931574 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:03:04.931648 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 11:03:04.931716 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 11:03:04.931917 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 11:03:04.931988 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 11:03:04.932052 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:04.932124 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 11:03:04.932198 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 11:03:04.932263 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 11:03:04.932343 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 11:03:04.932412 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:04.932485 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:03:04.932551 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 11:03:04.932621 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 11:03:04.932689 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 11:03:04.932874 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 11:03:04.932944 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:04.933016 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:03:04.933163 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 11:03:04.933229 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 11:03:04.933293 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 11:03:04.933424 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:04.933500 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 11:03:04.933567 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 11:03:04.933634 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 11:03:04.933697 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 11:03:04.933779 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 11:03:04.933846 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:04.933921 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 11:03:04.933994 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 11:03:04.934062 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 11:03:04.934127 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 11:03:04.934189 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:04.934252 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:04.934341 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 11:03:04.934416 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 11:03:04.934485 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 11:03:04.934557 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 11:03:04.934621 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 11:03:04.934733 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:04.934846 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:04.934917 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 11:03:04.934982 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 11:03:04.935046 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:04.935110 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:04.935183 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 11:03:04.935248 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 11:03:04.935314 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 11:03:04.935414 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:04.935492 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:03:04.935552 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:03:04.935610 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:03:04.935686 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 11:03:04.936823 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 11:03:04.936924 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:04.936998 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 11:03:04.937059 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 11:03:04.937118 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:04.937186 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 11:03:04.937254 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 11:03:04.937353 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:04.937437 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 11:03:04.937498 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 11:03:04.937558 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:04.937627 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 11:03:04.937693 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 11:03:04.938890 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:04.939003 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 11:03:04.939078 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:04.939211 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:04.939293 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 11:03:04.939425 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:04.939507 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:04.939593 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 11:03:04.939669 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:04.939792 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:04.939893 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 11:03:04.939972 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 11:03:04.940050 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:04.940062 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:03:04.940074 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:03:04.940084 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:03:04.940094 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:03:04.940103 kernel: iommu: Default domain type: Translated
Jan 29 11:03:04.940116 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:03:04.940126 kernel: efivars: Registered efivars operations
Jan 29 11:03:04.940135 kernel: vgaarb: loaded
Jan 29 11:03:04.940145 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:03:04.940155 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:03:04.940165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:03:04.940175 kernel: pnp: PnP ACPI init
Jan 29 11:03:04.940272 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:03:04.940288 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:03:04.940298 kernel: NET: Registered PF_INET protocol family
Jan 29 11:03:04.940308 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:03:04.940328 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:03:04.940342 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:03:04.940352 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:03:04.940362 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:03:04.940372 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:03:04.940385 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:03:04.940396 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:03:04.940406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:03:04.940610 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 11:03:04.940629 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:03:04.940639 kernel: kvm [1]: HYP mode not available
Jan 29 11:03:04.940649 kernel: Initialise system trusted keyrings
Jan 29 11:03:04.940659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:03:04.940669 kernel: Key type asymmetric registered
Jan 29 11:03:04.940678 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:03:04.940692 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:03:04.940702 kernel: io scheduler mq-deadline registered
Jan 29 11:03:04.940712 kernel: io scheduler kyber registered
Jan 29 11:03:04.940721 kernel: io scheduler bfq registered
Jan 29 11:03:04.940732 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 11:03:04.940867 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 11:03:04.940956 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 11:03:04.941041 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 11:03:04.941136 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 11:03:04.941247 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 11:03:04.941431 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 11:03:04.941556 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:03:04.941654 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:03:04.941738 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.941955 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:03:04.942065 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:03:04.942151 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.942259 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:03:04.942374 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:03:04.942453 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.942531 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:03:04.942598 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:03:04.942663 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.942732 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:03:04.942870 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:03:04.942935 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.943012 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:03:04.943076 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:03:04.943139 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
11:03:04.943150 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:03:04.943215 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:03:04.943280 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:03:04.943396 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:04.943410 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:03:04.943418 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:03:04.943426 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:03:04.943499 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:03:04.943578 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:03:04.943589 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:03:04.943597 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:03:04.943666 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:03:04.943680 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:03:04.943688 kernel: thunder_xcv, ver 1.0 Jan 29 11:03:04.943699 kernel: thunder_bgx, ver 1.0 Jan 29 11:03:04.943706 kernel: nicpf, ver 1.0 Jan 29 11:03:04.943714 kernel: nicvf, ver 1.0 Jan 29 11:03:04.943856 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:03:04.943922 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:03:04 UTC (1738148584) Jan 29 11:03:04.943932 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:03:04.943944 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:03:04.943951 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:03:04.943959 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:03:04.943967 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:03:04.943975 kernel: Segment 
Routing with IPv6 Jan 29 11:03:04.943982 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:03:04.943990 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:03:04.943997 kernel: Key type dns_resolver registered Jan 29 11:03:04.944012 kernel: registered taskstats version 1 Jan 29 11:03:04.944021 kernel: Loading compiled-in X.509 certificates Jan 29 11:03:04.944029 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a' Jan 29 11:03:04.944036 kernel: Key type .fscrypt registered Jan 29 11:03:04.944044 kernel: Key type fscrypt-provisioning registered Jan 29 11:03:04.944052 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:03:04.944060 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:03:04.944068 kernel: ima: No architecture policies found Jan 29 11:03:04.944076 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:03:04.944085 kernel: clk: Disabling unused clocks Jan 29 11:03:04.944093 kernel: Freeing unused kernel memory: 39680K Jan 29 11:03:04.944101 kernel: Run /init as init process Jan 29 11:03:04.944108 kernel: with arguments: Jan 29 11:03:04.944116 kernel: /init Jan 29 11:03:04.944124 kernel: with environment: Jan 29 11:03:04.944131 kernel: HOME=/ Jan 29 11:03:04.944139 kernel: TERM=linux Jan 29 11:03:04.944146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:03:04.944156 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:03:04.944167 systemd[1]: Detected virtualization kvm. Jan 29 11:03:04.944175 systemd[1]: Detected architecture arm64. Jan 29 11:03:04.944183 systemd[1]: Running in initrd. 
Jan 29 11:03:04.944191 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:03:04.944198 systemd[1]: Hostname set to .
Jan 29 11:03:04.944207 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:03:04.944215 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:03:04.944226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:03:04.944234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:03:04.944243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:03:04.944251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:03:04.944259 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:03:04.944267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:03:04.944277 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:03:04.944287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:03:04.944295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:03:04.944303 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:03:04.944311 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:03:04.944334 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:03:04.944343 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:03:04.944351 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:03:04.944359 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:03:04.944370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:03:04.944378 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:03:04.944386 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:03:04.944395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:03:04.944403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:03:04.944412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:03:04.944420 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:03:04.944428 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:03:04.944438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:03:04.944446 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:03:04.944454 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:03:04.944463 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:03:04.944538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:03:04.944553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:04.944561 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:03:04.944570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:03:04.944610 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 11:03:04.944635 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:03:04.944646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:03:04.944655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:03:04.944663 kernel: Bridge firewalling registered
Jan 29 11:03:04.944671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:03:04.944679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:04.944688 systemd-journald[237]: Journal started
Jan 29 11:03:04.944709 systemd-journald[237]: Runtime Journal (/run/log/journal/5ef497cf58154c11867abfa66e513b6f) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:03:04.914985 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 11:03:04.937286 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 11:03:04.947849 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:03:04.956039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:04.958975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:03:04.965999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:03:04.967034 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:03:04.980364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:03:04.992635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:03:04.995297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:03:04.997863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:05.002986 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:03:05.007965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:03:05.009845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:03:05.019441 dracut-cmdline[271]: dracut-dracut-053
Jan 29 11:03:05.026276 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:03:05.053355 systemd-resolved[272]: Positive Trust Anchors:
Jan 29 11:03:05.053436 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:03:05.053468 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:03:05.064267 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 29 11:03:05.066442 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:03:05.067809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:03:05.118773 kernel: SCSI subsystem initialized
Jan 29 11:03:05.123815 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:03:05.131809 kernel: iscsi: registered transport (tcp)
Jan 29 11:03:05.147990 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:03:05.148053 kernel: QLogic iSCSI HBA Driver
Jan 29 11:03:05.201445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:03:05.208121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:03:05.224835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:03:05.224928 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:03:05.225806 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:03:05.292754 kernel: raid6: neonx8 gen() 15130 MB/s
Jan 29 11:03:05.293859 kernel: raid6: neonx4 gen() 15471 MB/s
Jan 29 11:03:05.310810 kernel: raid6: neonx2 gen() 13057 MB/s
Jan 29 11:03:05.327823 kernel: raid6: neonx1 gen() 10226 MB/s
Jan 29 11:03:05.344848 kernel: raid6: int64x8 gen() 6792 MB/s
Jan 29 11:03:05.361816 kernel: raid6: int64x4 gen() 7193 MB/s
Jan 29 11:03:05.378811 kernel: raid6: int64x2 gen() 5958 MB/s
Jan 29 11:03:05.395907 kernel: raid6: int64x1 gen() 4735 MB/s
Jan 29 11:03:05.395977 kernel: raid6: using algorithm neonx4 gen() 15471 MB/s
Jan 29 11:03:05.412814 kernel: raid6: .... xor() 12084 MB/s, rmw enabled
Jan 29 11:03:05.412883 kernel: raid6: using neon recovery algorithm
Jan 29 11:03:05.417971 kernel: xor: measuring software checksum speed
Jan 29 11:03:05.418037 kernel: 8regs : 18662 MB/sec
Jan 29 11:03:05.418858 kernel: 32regs : 19655 MB/sec
Jan 29 11:03:05.418905 kernel: arm64_neon : 26954 MB/sec
Jan 29 11:03:05.418915 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jan 29 11:03:05.470824 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:03:05.487818 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:03:05.496181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:03:05.523870 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 29 11:03:05.527832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:03:05.535928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:03:05.554058 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jan 29 11:03:05.594554 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:03:05.603199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:03:05.657305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:03:05.668018 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:03:05.688978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:03:05.693252 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:03:05.695505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:03:05.696174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:03:05.704227 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:03:05.731795 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:03:05.758844 kernel: ACPI: bus type USB registered
Jan 29 11:03:05.758907 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:03:05.758918 kernel: usbcore: registered new interface driver hub
Jan 29 11:03:05.760784 kernel: usbcore: registered new device driver usb
Jan 29 11:03:05.774565 kernel: scsi host0: Virtio SCSI HBA
Jan 29 11:03:05.780294 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:03:05.780400 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 11:03:05.803528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:03:05.804847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:05.806871 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:05.808874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:03:05.809059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:05.809730 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:05.816122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:05.822583 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:03:05.834010 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 11:03:05.834148 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 11:03:05.834229 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:03:05.834308 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 11:03:05.834462 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 11:03:05.834546 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:03:05.834650 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 11:03:05.834727 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 11:03:05.834881 kernel: hub 2-0:1.0: USB hub found
Jan 29 11:03:05.834976 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 11:03:05.844517 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:05.849059 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 29 11:03:05.856338 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 29 11:03:05.856486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:03:05.856499 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:03:05.852187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:05.858825 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 29 11:03:05.868055 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 11:03:05.868187 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 29 11:03:05.868283 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 29 11:03:05.868439 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 11:03:05.868535 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:03:05.868555 kernel: GPT:17805311 != 80003071
Jan 29 11:03:05.868565 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:03:05.868574 kernel: GPT:17805311 != 80003071
Jan 29 11:03:05.868582 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:03:05.868646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:05.868657 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 29 11:03:05.882345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:05.912145 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (500)
Jan 29 11:03:05.911166 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 11:03:05.922856 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (507)
Jan 29 11:03:05.925528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 11:03:05.942034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:03:05.947365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 11:03:05.948128 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 11:03:05.962072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:03:05.970982 disk-uuid[569]: Primary Header is updated.
Jan 29 11:03:05.970982 disk-uuid[569]: Secondary Entries is updated.
Jan 29 11:03:05.970982 disk-uuid[569]: Secondary Header is updated.
Jan 29 11:03:05.980806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:06.072782 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 11:03:06.314807 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 29 11:03:06.453767 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 29 11:03:06.454871 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 11:03:06.456811 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 29 11:03:06.510964 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 29 11:03:06.511202 kernel: usbcore: registered new interface driver usbhid
Jan 29 11:03:06.511215 kernel: usbhid: USB HID core driver
Jan 29 11:03:06.993769 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:06.993841 disk-uuid[571]: The operation has completed successfully.
Jan 29 11:03:07.055023 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:03:07.056481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:03:07.064093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:03:07.085213 sh[586]: Success
Jan 29 11:03:07.097789 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:03:07.159557 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:03:07.168939 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:03:07.172049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:03:07.196210 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:03:07.196284 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:07.196312 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:03:07.196333 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:03:07.196361 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:03:07.203798 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 11:03:07.205661 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:03:07.207799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:03:07.213011 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:03:07.215085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:03:07.233860 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:07.233925 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:07.233936 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:03:07.241771 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:03:07.241835 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:03:07.252264 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:03:07.253990 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:07.260351 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:03:07.269180 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:03:07.332911 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:03:07.343155 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:03:07.372467 systemd-networkd[768]: lo: Link UP
Jan 29 11:03:07.372479 systemd-networkd[768]: lo: Gained carrier
Jan 29 11:03:07.376056 systemd-networkd[768]: Enumeration completed
Jan 29 11:03:07.376300 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:03:07.377537 ignition[690]: Ignition 2.20.0
Jan 29 11:03:07.377429 systemd[1]: Reached target network.target - Network.
Jan 29 11:03:07.377544 ignition[690]: Stage: fetch-offline
Jan 29 11:03:07.380003 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:07.377585 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:07.380006 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:07.377593 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:07.381031 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:03:07.377793 ignition[690]: parsed url from cmdline: ""
Jan 29 11:03:07.382126 systemd-networkd[768]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:07.377797 ignition[690]: no config URL provided
Jan 29 11:03:07.382129 systemd-networkd[768]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:07.377802 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:03:07.382737 systemd-networkd[768]: eth0: Link UP
Jan 29 11:03:07.377809 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:03:07.382764 systemd-networkd[768]: eth0: Gained carrier
Jan 29 11:03:07.377815 ignition[690]: failed to fetch config: resource requires networking
Jan 29 11:03:07.382774 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:07.378011 ignition[690]: Ignition finished successfully
Jan 29 11:03:07.390300 systemd-networkd[768]: eth1: Link UP
Jan 29 11:03:07.390355 systemd-networkd[768]: eth1: Gained carrier
Jan 29 11:03:07.390367 systemd-networkd[768]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:07.391031 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:03:07.405703 ignition[776]: Ignition 2.20.0 Jan 29 11:03:07.405713 ignition[776]: Stage: fetch Jan 29 11:03:07.405926 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:07.405936 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:07.406037 ignition[776]: parsed url from cmdline: "" Jan 29 11:03:07.406041 ignition[776]: no config URL provided Jan 29 11:03:07.406046 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:03:07.406054 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:03:07.406143 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 11:03:07.406970 ignition[776]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 11:03:07.422840 systemd-networkd[768]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:03:07.453833 systemd-networkd[768]: eth0: DHCPv4 address 188.245.239.20/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 11:03:07.607916 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 11:03:07.612358 ignition[776]: GET result: OK Jan 29 11:03:07.612464 ignition[776]: parsing config with SHA512: ee159717fda7c9c14edd86f5294638e381c27892b4f16450c768149476364327e303ee2b39c23ee80c03d18ec2439cc6a67683878c78ca47d59e0c646771f1c2 Jan 29 11:03:07.619289 unknown[776]: fetched base config from "system" Jan 29 11:03:07.619340 unknown[776]: fetched base config from "system" Jan 29 11:03:07.620094 ignition[776]: fetch: fetch complete Jan 29 11:03:07.619346 unknown[776]: fetched user config from "hetzner" Jan 29 11:03:07.620099 ignition[776]: fetch: fetch passed Jan 29 11:03:07.622071 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jan 29 11:03:07.620159 ignition[776]: Ignition finished successfully Jan 29 11:03:07.628953 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:03:07.643732 ignition[783]: Ignition 2.20.0 Jan 29 11:03:07.643779 ignition[783]: Stage: kargs Jan 29 11:03:07.643968 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:07.643978 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:07.647243 ignition[783]: kargs: kargs passed Jan 29 11:03:07.647349 ignition[783]: Ignition finished successfully Jan 29 11:03:07.649206 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:03:07.655994 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:03:07.668315 ignition[789]: Ignition 2.20.0 Jan 29 11:03:07.668328 ignition[789]: Stage: disks Jan 29 11:03:07.668525 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:07.668535 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:07.669574 ignition[789]: disks: disks passed Jan 29 11:03:07.671099 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:03:07.669631 ignition[789]: Ignition finished successfully Jan 29 11:03:07.672237 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:03:07.673030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:03:07.674029 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:03:07.675007 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:03:07.676142 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:03:07.683005 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 29 11:03:07.698908 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:03:07.703888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:03:07.713989 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:03:07.767781 kernel: EXT4-fs (sda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none. Jan 29 11:03:07.769041 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:03:07.771185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:03:07.780911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:03:07.785001 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:03:07.788981 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:03:07.792223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:03:07.792262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:03:07.797078 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:03:07.803539 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (805) Jan 29 11:03:07.803590 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:03:07.803602 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:03:07.804091 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:03:07.806029 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:03:07.809890 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:03:07.809930 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:03:07.818027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:03:07.868200 coreos-metadata[807]: Jan 29 11:03:07.868 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 11:03:07.871763 coreos-metadata[807]: Jan 29 11:03:07.871 INFO Fetch successful Jan 29 11:03:07.874849 coreos-metadata[807]: Jan 29 11:03:07.874 INFO wrote hostname ci-4152-2-0-5-7d4b33c67e to /sysroot/etc/hostname Jan 29 11:03:07.877783 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:03:07.879235 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:03:07.885954 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:03:07.892232 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:03:07.896628 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:03:08.004793 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:03:08.011909 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:03:08.016812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 29 11:03:08.021767 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:03:08.049242 ignition[922]: INFO : Ignition 2.20.0 Jan 29 11:03:08.051538 ignition[922]: INFO : Stage: mount Jan 29 11:03:08.051538 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:08.051538 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:08.051538 ignition[922]: INFO : mount: mount passed Jan 29 11:03:08.051538 ignition[922]: INFO : Ignition finished successfully Jan 29 11:03:08.050116 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:03:08.054630 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:03:08.061570 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:03:08.196383 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:03:08.204022 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:03:08.213073 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (935) Jan 29 11:03:08.214781 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:03:08.214843 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:03:08.214869 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:03:08.218804 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:03:08.218878 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:03:08.222199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:03:08.247028 ignition[952]: INFO : Ignition 2.20.0 Jan 29 11:03:08.247028 ignition[952]: INFO : Stage: files Jan 29 11:03:08.248577 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:08.248577 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:08.248577 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:03:08.251380 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:03:08.251380 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:03:08.253951 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:03:08.253951 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:03:08.253951 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:03:08.253712 unknown[952]: wrote ssh authorized keys file for user: core Jan 29 11:03:08.259074 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:03:08.259074 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:03:08.259074 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:03:08.259074 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 11:03:08.318220 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:03:08.752717 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 
11:03:08.752717 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:03:08.752717 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 29 11:03:08.875944 systemd-networkd[768]: eth0: Gained IPv6LL Jan 29 11:03:09.260731 systemd-networkd[768]: eth1: Gained IPv6LL Jan 29 11:03:09.306830 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:03:09.416082 
ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:03:09.416082 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 11:03:09.940689 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 29 11:03:10.209515 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:03:10.209515 ignition[952]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(d): [finished] 
processing unit "containerd.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:03:10.213690 ignition[952]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:03:10.213690 ignition[952]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:03:10.213690 ignition[952]: INFO : files: files passed Jan 29 11:03:10.213690 ignition[952]: INFO : Ignition finished successfully Jan 29 11:03:10.213462 systemd[1]: Finished ignition-files.service - 
Ignition (files). Jan 29 11:03:10.221148 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:03:10.226518 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:03:10.234423 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:03:10.236354 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:03:10.245479 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:03:10.245479 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:03:10.249189 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:03:10.252101 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:03:10.253951 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:03:10.259071 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:03:10.290264 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:03:10.291084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:03:10.292363 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:03:10.293043 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:03:10.294777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:03:10.301011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:03:10.315443 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:03:10.322017 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 29 11:03:10.337393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:03:10.338257 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:03:10.339789 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:03:10.341160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:03:10.341317 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:03:10.344305 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:03:10.345333 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:03:10.346800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:03:10.347852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:03:10.349044 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:03:10.350244 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:03:10.351422 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:03:10.352690 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:03:10.353946 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:03:10.355086 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:03:10.356025 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:03:10.356156 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:03:10.357485 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:03:10.358197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:03:10.359432 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:03:10.362849 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 11:03:10.363765 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:03:10.363913 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:03:10.366027 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:03:10.366205 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:03:10.367737 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:03:10.367878 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:03:10.369370 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:03:10.369557 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:03:10.382183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:03:10.383423 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:03:10.383665 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:03:10.389058 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:03:10.389854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:03:10.390197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:03:10.391275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:03:10.391528 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:03:10.403387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:03:10.403510 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 11:03:10.410492 ignition[1004]: INFO : Ignition 2.20.0 Jan 29 11:03:10.410492 ignition[1004]: INFO : Stage: umount Jan 29 11:03:10.410492 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:03:10.410492 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:03:10.417483 ignition[1004]: INFO : umount: umount passed Jan 29 11:03:10.417483 ignition[1004]: INFO : Ignition finished successfully Jan 29 11:03:10.414329 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:03:10.414446 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:03:10.415737 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:03:10.415849 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:03:10.416906 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:03:10.416958 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:03:10.419187 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:03:10.419258 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:03:10.420271 systemd[1]: Stopped target network.target - Network. Jan 29 11:03:10.422119 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:03:10.422189 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:03:10.423575 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:03:10.426134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:03:10.429916 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:03:10.431468 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:03:10.434324 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:03:10.435783 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 29 11:03:10.435926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:03:10.436944 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:03:10.437008 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:03:10.438038 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:03:10.438099 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:03:10.439576 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:03:10.439632 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:03:10.441555 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:03:10.442726 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:03:10.445229 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:03:10.446004 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:03:10.446254 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:03:10.448170 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:03:10.448275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:03:10.449193 systemd-networkd[768]: eth1: DHCPv6 lease lost Jan 29 11:03:10.451774 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:03:10.451913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:03:10.453926 systemd-networkd[768]: eth0: DHCPv6 lease lost Jan 29 11:03:10.456119 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:03:10.456243 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:03:10.457712 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:03:10.457808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:03:10.469057 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 29 11:03:10.469868 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:03:10.470001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:03:10.472665 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:03:10.472799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:03:10.476253 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:03:10.476344 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:03:10.477615 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:03:10.477679 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:03:10.479121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:03:10.495036 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:03:10.495164 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:03:10.505983 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:03:10.506243 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:03:10.509263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:03:10.509480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:03:10.510957 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:03:10.511013 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:03:10.512866 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:03:10.512927 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:03:10.514813 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:03:10.514870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 29 11:03:10.516527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:03:10.516585 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:03:10.530372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:03:10.532676 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:03:10.532856 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:03:10.534695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:03:10.534840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:03:10.541405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:03:10.541521 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:03:10.543022 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:03:10.549072 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:03:10.570558 systemd[1]: Switching root. Jan 29 11:03:10.610605 systemd-journald[237]: Journal stopped Jan 29 11:03:11.611521 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 29 11:03:11.611596 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:03:11.611616 kernel: SELinux: policy capability open_perms=1 Jan 29 11:03:11.611627 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:03:11.611638 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:03:11.611649 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:03:11.611659 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:03:11.611668 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:03:11.611678 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:03:11.611688 systemd[1]: Successfully loaded SELinux policy in 36.308ms. Jan 29 11:03:11.611708 kernel: audit: type=1403 audit(1738148590.808:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:03:11.611722 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.995ms. Jan 29 11:03:11.611733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:03:11.615521 systemd[1]: Detected virtualization kvm. Jan 29 11:03:11.615550 systemd[1]: Detected architecture arm64. Jan 29 11:03:11.615561 systemd[1]: Detected first boot. Jan 29 11:03:11.615573 systemd[1]: Hostname set to . Jan 29 11:03:11.615584 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:03:11.615594 zram_generator::config[1065]: No configuration found. Jan 29 11:03:11.615615 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:03:11.615626 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:03:11.615643 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jan 29 11:03:11.615655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:03:11.615665 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:03:11.615676 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:03:11.615686 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:03:11.615696 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:03:11.615709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:03:11.615719 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:03:11.615730 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:03:11.615798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:03:11.615813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:03:11.615824 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:03:11.615835 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:03:11.615846 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:03:11.615856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:03:11.615869 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:03:11.615879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:03:11.615889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:03:11.615899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 29 11:03:11.615910 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:03:11.615920 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:03:11.615931 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:03:11.615943 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:03:11.615954 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:03:11.615964 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:03:11.615975 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:03:11.615986 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:03:11.615998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:03:11.616008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:03:11.616018 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:03:11.616028 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:03:11.616040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:03:11.616054 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:03:11.616066 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:03:11.616077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:03:11.616087 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:03:11.616097 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:03:11.616109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:11.616120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:03:11.616130 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:03:11.616145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:11.616155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:03:11.616166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:11.616176 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:03:11.616187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:11.616200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:03:11.616210 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 11:03:11.616222 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 11:03:11.616234 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:03:11.616246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:03:11.616256 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:03:11.616269 kernel: fuse: init (API version 7.39)
Jan 29 11:03:11.616296 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:03:11.616311 kernel: loop: module loaded
Jan 29 11:03:11.616322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:03:11.616332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:03:11.616342 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:03:11.616390 systemd-journald[1147]: Collecting audit messages is disabled.
Jan 29 11:03:11.616423 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:03:11.616434 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:03:11.616444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:03:11.616461 systemd-journald[1147]: Journal started
Jan 29 11:03:11.616485 systemd-journald[1147]: Runtime Journal (/run/log/journal/5ef497cf58154c11867abfa66e513b6f) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:03:11.620799 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:03:11.620883 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:03:11.623350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:03:11.627422 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:03:11.632449 kernel: ACPI: bus type drm_connector registered
Jan 29 11:03:11.627604 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:03:11.629245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:11.629493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:11.631242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:11.631540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:11.633273 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:03:11.634022 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:03:11.635804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:03:11.635980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:03:11.637572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:11.638316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:11.640371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:03:11.643839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:03:11.645085 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:03:11.662656 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:03:11.669018 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:03:11.674474 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:03:11.676862 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:03:11.689068 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:03:11.693829 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:03:11.696880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:11.702757 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:03:11.704825 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:03:11.718153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:03:11.727996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:03:11.736520 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:03:11.743576 systemd-journald[1147]: Time spent on flushing to /var/log/journal/5ef497cf58154c11867abfa66e513b6f is 43.056ms for 1116 entries.
Jan 29 11:03:11.743576 systemd-journald[1147]: System Journal (/var/log/journal/5ef497cf58154c11867abfa66e513b6f) is 8.0M, max 584.8M, 576.8M free.
Jan 29 11:03:11.796160 systemd-journald[1147]: Received client request to flush runtime journal.
Jan 29 11:03:11.738917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:03:11.739682 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:03:11.762312 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:03:11.768839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:03:11.777242 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:03:11.787968 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:03:11.802620 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:03:11.808564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:03:11.818940 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:03:11.820824 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 11:03:11.820837 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 11:03:11.829115 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:03:11.838015 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:03:11.876319 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:03:11.886029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:03:11.906736 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 11:03:11.906796 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 11:03:11.915227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:03:12.323702 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:03:12.331154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:03:12.366258 systemd-udevd[1230]: Using default interface naming scheme 'v255'.
Jan 29 11:03:12.390993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:03:12.407037 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:03:12.430030 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:03:12.486850 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 29 11:03:12.510979 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:03:12.611843 systemd-networkd[1240]: lo: Link UP
Jan 29 11:03:12.611850 systemd-networkd[1240]: lo: Gained carrier
Jan 29 11:03:12.614196 systemd-networkd[1240]: Enumeration completed
Jan 29 11:03:12.614431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:03:12.617237 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:12.617912 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:12.619113 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:12.619178 systemd-networkd[1240]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:12.620947 systemd-networkd[1240]: eth0: Link UP
Jan 29 11:03:12.620955 systemd-networkd[1240]: eth0: Gained carrier
Jan 29 11:03:12.620974 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:12.622952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:03:12.624171 systemd-networkd[1240]: eth1: Link UP
Jan 29 11:03:12.624180 systemd-networkd[1240]: eth1: Gained carrier
Jan 29 11:03:12.624200 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:12.644821 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1242)
Jan 29 11:03:12.648803 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:03:12.649963 systemd-networkd[1240]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:03:12.693176 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Jan 29 11:03:12.693198 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 11:03:12.693414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:12.697961 systemd-networkd[1240]: eth0: DHCPv4 address 188.245.239.20/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:03:12.699020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:12.701907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:12.712495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:12.714910 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:03:12.714951 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:03:12.722452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:12.722635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:12.740070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:12.740252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:12.744221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:12.752619 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:12.753055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:12.756703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:03:12.791911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:03:12.794804 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 11:03:12.794919 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:03:12.794936 kernel: [drm] features: -context_init
Jan 29 11:03:12.794948 kernel: [drm] number of scanouts: 1
Jan 29 11:03:12.794988 kernel: [drm] number of cap sets: 0
Jan 29 11:03:12.797774 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 11:03:12.800163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:12.805778 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:03:12.823819 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:03:12.825093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:03:12.825378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:12.837086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:12.895861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:12.957125 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:03:12.965078 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:03:12.984315 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:03:13.013687 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:03:13.017480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:03:13.024987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:03:13.032260 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:03:13.059707 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:03:13.061825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:03:13.063857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:03:13.064039 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:03:13.067809 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:03:13.071982 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:03:13.079059 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:03:13.083028 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:03:13.084919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:13.090220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:03:13.094234 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:03:13.100264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:03:13.105977 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:03:13.131805 kernel: loop0: detected capacity change from 0 to 116808
Jan 29 11:03:13.138966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:03:13.148051 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:03:13.149108 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:03:13.162808 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:03:13.185486 kernel: loop1: detected capacity change from 0 to 194096
Jan 29 11:03:13.225985 kernel: loop2: detected capacity change from 0 to 113536
Jan 29 11:03:13.272898 kernel: loop3: detected capacity change from 0 to 8
Jan 29 11:03:13.294055 kernel: loop4: detected capacity change from 0 to 116808
Jan 29 11:03:13.304796 kernel: loop5: detected capacity change from 0 to 194096
Jan 29 11:03:13.320899 kernel: loop6: detected capacity change from 0 to 113536
Jan 29 11:03:13.329897 kernel: loop7: detected capacity change from 0 to 8
Jan 29 11:03:13.330251 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 11:03:13.330733 (sd-merge)[1328]: Merged extensions into '/usr'.
Jan 29 11:03:13.335700 systemd[1]: Reloading requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:03:13.335718 systemd[1]: Reloading...
Jan 29 11:03:13.423786 zram_generator::config[1359]: No configuration found.
Jan 29 11:03:13.543950 ldconfig[1310]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:03:13.546980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:03:13.604914 systemd[1]: Reloading finished in 268 ms.
Jan 29 11:03:13.621828 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:03:13.624443 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:03:13.637087 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:03:13.640997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:03:13.648202 systemd[1]: Reloading requested from client PID 1400 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:03:13.648224 systemd[1]: Reloading...
Jan 29 11:03:13.681024 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:03:13.681334 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:03:13.682798 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:03:13.683123 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jan 29 11:03:13.683251 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jan 29 11:03:13.688441 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:03:13.688599 systemd-tmpfiles[1401]: Skipping /boot
Jan 29 11:03:13.700385 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:03:13.700531 systemd-tmpfiles[1401]: Skipping /boot
Jan 29 11:03:13.735765 zram_generator::config[1429]: No configuration found.
Jan 29 11:03:13.847839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:03:13.905926 systemd[1]: Reloading finished in 257 ms.
Jan 29 11:03:13.925893 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:03:13.947065 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:03:13.956171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:03:13.966217 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:03:13.973088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:03:13.987874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:03:13.995144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:14.002537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:14.017170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:14.031025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:14.031769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:14.032665 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:03:14.040569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:14.041382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:14.069098 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:03:14.075509 augenrules[1510]: No rules
Jan 29 11:03:14.077025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:14.077236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:14.079402 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:14.079728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:14.081690 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:03:14.082451 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:03:14.086472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:03:14.094097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:14.096006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:14.107286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:14.116548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:14.118919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:14.121396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:03:14.122763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:14.122939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:14.131551 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:03:14.133040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:14.139108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:14.146908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:03:14.147635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:14.148992 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:03:14.155147 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:03:14.156417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:14.156601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:14.158101 systemd-resolved[1484]: Positive Trust Anchors:
Jan 29 11:03:14.158183 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:03:14.158215 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:03:14.160233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:14.160422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:14.165609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:14.167087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:14.175587 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:03:14.176848 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:03:14.177025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:03:14.178103 systemd-resolved[1484]: Using system hostname 'ci-4152-2-0-5-7d4b33c67e'.
Jan 29 11:03:14.183915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:14.183986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:03:14.191320 augenrules[1530]: /sbin/augenrules: No change
Jan 29 11:03:14.193017 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:03:14.196822 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:03:14.197727 systemd[1]: Reached target network.target - Network.
Jan 29 11:03:14.198799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:03:14.202218 augenrules[1561]: No rules
Jan 29 11:03:14.205636 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:03:14.206170 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:03:14.253674 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:03:14.256110 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:03:14.257088 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:03:14.257918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:03:14.258682 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:03:14.259688 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:03:14.259733 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:03:14.261021 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:03:14.261847 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:03:14.262639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:03:14.263521 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:03:14.265381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:03:14.268011 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:03:14.270195 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:03:14.274154 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:03:14.276070 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:03:14.277556 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:03:14.278656 systemd[1]: System is tainted: cgroupsv1
Jan 29 11:03:14.278853 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:03:14.278971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:03:14.280837 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:03:14.285968 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:03:14.295055 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:03:14.303007 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:03:14.309194 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:03:14.311217 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:03:14.311904 systemd-timesyncd[1551]: Contacted time server 136.243.177.133:123 (0.flatcar.pool.ntp.org).
Jan 29 11:03:14.311970 systemd-timesyncd[1551]: Initial clock synchronization to Wed 2025-01-29 11:03:13.941790 UTC.
Jan 29 11:03:14.319604 jq[1575]: false
Jan 29 11:03:14.323026 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:03:14.330274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:03:14.347328 dbus-daemon[1574]: [system] SELinux support is enabled
Jan 29 11:03:14.349417 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 11:03:14.358829 coreos-metadata[1572]: Jan 29 11:03:14.357 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 11:03:14.359288 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:03:14.365619 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:03:14.366543 coreos-metadata[1572]: Jan 29 11:03:14.366 INFO Fetch successful Jan 29 11:03:14.366579 extend-filesystems[1578]: Found loop4 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found loop5 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found loop6 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found loop7 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda1 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda2 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda3 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found usr Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda4 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda6 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda7 Jan 29 11:03:14.366579 extend-filesystems[1578]: Found sda9 Jan 29 11:03:14.366579 extend-filesystems[1578]: Checking size of /dev/sda9 Jan 29 11:03:14.389929 coreos-metadata[1572]: Jan 29 11:03:14.367 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 11:03:14.389929 coreos-metadata[1572]: Jan 29 11:03:14.369 INFO Fetch successful Jan 29 11:03:14.388155 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:03:14.392195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:03:14.405046 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 29 11:03:14.406362 extend-filesystems[1578]: Resized partition /dev/sda9 Jan 29 11:03:14.413976 extend-filesystems[1604]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:03:14.415540 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:03:14.417928 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:03:14.431038 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:03:14.431326 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:03:14.431568 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:03:14.431805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:03:14.432778 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 11:03:14.439703 jq[1603]: true Jan 29 11:03:14.447024 systemd-networkd[1240]: eth1: Gained IPv6LL Jan 29 11:03:14.451152 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:03:14.451412 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:03:14.459649 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:03:14.487995 jq[1613]: true Jan 29 11:03:14.491238 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:03:14.514451 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 29 11:03:14.519771 update_engine[1600]: I20250129 11:03:14.514781 1600 main.cc:92] Flatcar Update Engine starting Jan 29 11:03:14.516589 systemd-networkd[1240]: eth0: Gained IPv6LL Jan 29 11:03:14.533710 update_engine[1600]: I20250129 11:03:14.532930 1600 update_check_scheduler.cc:74] Next update check in 11m52s Jan 29 11:03:14.542571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:14.549520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:03:14.550889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:03:14.550937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:03:14.554288 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:03:14.554322 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:03:14.561754 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:03:14.570663 tar[1611]: linux-arm64/helm Jan 29 11:03:14.575977 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:03:14.593992 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:03:14.601821 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1249) Jan 29 11:03:14.664580 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:03:14.669719 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 29 11:03:14.677440 systemd-logind[1595]: New seat seat0. Jan 29 11:03:14.690915 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 11:03:14.687569 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:03:14.687594 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 11:03:14.687868 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:03:14.717515 extend-filesystems[1604]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:03:14.717515 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 11:03:14.717515 extend-filesystems[1604]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 11:03:14.740922 extend-filesystems[1578]: Resized filesystem in /dev/sda9 Jan 29 11:03:14.740922 extend-filesystems[1578]: Found sr0 Jan 29 11:03:14.747397 bash[1669]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:03:14.719095 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:03:14.719449 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:03:14.745969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:03:14.759678 systemd[1]: Starting sshkeys.service... Jan 29 11:03:14.761934 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:03:14.780757 containerd[1618]: time="2025-01-29T11:03:14.777140080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:03:14.786592 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:03:14.793102 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 29 11:03:14.846865 coreos-metadata[1681]: Jan 29 11:03:14.846 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 11:03:14.856218 coreos-metadata[1681]: Jan 29 11:03:14.855 INFO Fetch successful Jan 29 11:03:14.858585 unknown[1681]: wrote ssh authorized keys file for user: core Jan 29 11:03:14.859653 containerd[1618]: time="2025-01-29T11:03:14.858432880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.860982840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861040320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861065000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861699000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861733200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861854280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:03:14.862276 containerd[1618]: time="2025-01-29T11:03:14.861872160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.862849240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.862882280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.862903920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.862917200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.863057400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864335 containerd[1618]: time="2025-01-29T11:03:14.863336400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864536 containerd[1618]: time="2025-01-29T11:03:14.864356160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:03:14.864536 containerd[1618]: time="2025-01-29T11:03:14.864383520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 29 11:03:14.864536 containerd[1618]: time="2025-01-29T11:03:14.864523280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:03:14.864594 containerd[1618]: time="2025-01-29T11:03:14.864578320Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.881719880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.881839040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.881861520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.881943280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.881967320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:03:14.882492 containerd[1618]: time="2025-01-29T11:03:14.882182760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884227960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884502040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884534480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884555080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884573880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884593400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884612120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884631320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884651720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884668120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884686200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884702160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.884730400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 29 11:03:14.887755 containerd[1618]: time="2025-01-29T11:03:14.886772000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886821040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886849160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886866720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886896880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886914000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886932400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886949960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886970840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.886987880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887007000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887024960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887045080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887081040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887101760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888142 containerd[1618]: time="2025-01-29T11:03:14.887116520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887329160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887357040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887374840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887393440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887407240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887429680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887444360Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:03:14.888444 containerd[1618]: time="2025-01-29T11:03:14.887457800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:03:14.888589 containerd[1618]: time="2025-01-29T11:03:14.887925880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:03:14.888589 containerd[1618]: time="2025-01-29T11:03:14.887999240Z" level=info msg="Connect containerd service" Jan 29 11:03:14.888589 containerd[1618]: time="2025-01-29T11:03:14.888049920Z" level=info msg="using legacy CRI server" Jan 29 11:03:14.888589 containerd[1618]: time="2025-01-29T11:03:14.888064040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:03:14.888589 containerd[1618]: time="2025-01-29T11:03:14.888374200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.892109880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893021240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893083600Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893195520Z" level=info msg="Start subscribing containerd event" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893277160Z" level=info msg="Start recovering state" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893387440Z" level=info msg="Start event monitor" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893401240Z" level=info msg="Start snapshots syncer" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893414480Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893422200Z" level=info msg="Start streaming server" Jan 29 11:03:14.896789 containerd[1618]: time="2025-01-29T11:03:14.893573680Z" level=info msg="containerd successfully booted in 0.117861s" Jan 29 11:03:14.893923 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:03:14.912780 update-ssh-keys[1687]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:03:14.914310 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:03:14.924307 systemd[1]: Finished sshkeys.service. Jan 29 11:03:15.099634 locksmithd[1640]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:03:15.574525 tar[1611]: linux-arm64/LICENSE Jan 29 11:03:15.574525 tar[1611]: linux-arm64/README.md Jan 29 11:03:15.593322 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 29 11:03:15.606331 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:15.607075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:16.160396 kubelet[1713]: E0129 11:03:16.160367 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:16.160673 sshd_keygen[1625]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:03:16.165867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:16.166080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:16.186563 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:03:16.198244 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:03:16.208329 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:03:16.208671 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:03:16.215241 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:03:16.229273 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:03:16.237235 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:03:16.247047 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:03:16.248659 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:03:16.249572 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:03:16.250452 systemd[1]: Startup finished in 6.919s (kernel) + 5.478s (userspace) = 12.397s. 
Jan 29 11:03:26.269485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:03:26.277265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:26.407063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:26.411998 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:26.468878 kubelet[1759]: E0129 11:03:26.468815 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:26.472193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:26.472432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:36.519101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:03:36.530183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:36.723142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:03:36.723502 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:36.784432 kubelet[1780]: E0129 11:03:36.784095 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:36.788370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:36.788542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:47.018826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:03:47.030223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:47.157026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:47.168513 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:47.219853 kubelet[1801]: E0129 11:03:47.219791 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:47.222229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:47.222374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:57.269398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:03:57.285159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:03:57.410996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:57.421612 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:57.473217 kubelet[1822]: E0129 11:03:57.473153 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:57.477052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:57.477398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:59.898166 update_engine[1600]: I20250129 11:03:59.898020 1600 update_attempter.cc:509] Updating boot flags... Jan 29 11:03:59.948827 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1840) Jan 29 11:04:00.019766 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1841) Jan 29 11:04:07.518850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 11:04:07.529133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:07.650021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:04:07.656077 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:07.716501 kubelet[1861]: E0129 11:04:07.716397 1861 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:07.722965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:07.723986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:17.769185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 11:04:17.778077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:17.894015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:17.908624 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:17.955297 kubelet[1882]: E0129 11:04:17.955255 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:17.957486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:17.957752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:28.019419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 11:04:28.027162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:04:28.161077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:28.165601 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:28.226222 kubelet[1903]: E0129 11:04:28.226153 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:28.229276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:28.229517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:29.204327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:04:29.209311 systemd[1]: Started sshd@0-188.245.239.20:22-195.178.110.65:57630.service - OpenSSH per-connection server daemon (195.178.110.65:57630). Jan 29 11:04:29.251194 sshd[1912]: Connection closed by 195.178.110.65 port 57630 Jan 29 11:04:29.252703 systemd[1]: sshd@0-188.245.239.20:22-195.178.110.65:57630.service: Deactivated successfully. Jan 29 11:04:38.270300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 11:04:38.278264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:38.445209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:04:38.451271 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:38.511499 kubelet[1929]: E0129 11:04:38.511397 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:38.514419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:38.514619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:48.519350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 11:04:48.533802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:48.712160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:48.727577 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:48.785263 kubelet[1949]: E0129 11:04:48.785129 1949 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:48.789175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:48.789639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:59.019247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 11:04:59.040079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:04:59.242240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:59.254691 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:59.324108 kubelet[1970]: E0129 11:04:59.323924 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:59.327359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:59.327640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:04.522060 systemd[1]: Started sshd@1-188.245.239.20:22-147.75.109.163:32944.service - OpenSSH per-connection server daemon (147.75.109.163:32944). Jan 29 11:05:05.540066 sshd[1979]: Accepted publickey for core from 147.75.109.163 port 32944 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:05.543946 sshd-session[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:05.572906 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:05:05.587367 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:05:05.596779 systemd-logind[1595]: New session 1 of user core. Jan 29 11:05:05.611022 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:05:05.624365 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:05:05.638132 (systemd)[1985]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:05:05.775322 systemd[1985]: Queued start job for default target default.target. 
Jan 29 11:05:05.777374 systemd[1985]: Created slice app.slice - User Application Slice. Jan 29 11:05:05.777551 systemd[1985]: Reached target paths.target - Paths. Jan 29 11:05:05.777575 systemd[1985]: Reached target timers.target - Timers. Jan 29 11:05:05.785246 systemd[1985]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:05:05.821483 systemd[1985]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:05:05.821653 systemd[1985]: Reached target sockets.target - Sockets. Jan 29 11:05:05.821670 systemd[1985]: Reached target basic.target - Basic System. Jan 29 11:05:05.821768 systemd[1985]: Reached target default.target - Main User Target. Jan 29 11:05:05.821815 systemd[1985]: Startup finished in 173ms. Jan 29 11:05:05.823267 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:05:05.830813 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:05:06.531414 systemd[1]: Started sshd@2-188.245.239.20:22-147.75.109.163:32960.service - OpenSSH per-connection server daemon (147.75.109.163:32960). Jan 29 11:05:07.531693 sshd[1997]: Accepted publickey for core from 147.75.109.163 port 32960 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:07.534964 sshd-session[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:07.545277 systemd-logind[1595]: New session 2 of user core. Jan 29 11:05:07.551264 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:05:08.218979 sshd[2000]: Connection closed by 147.75.109.163 port 32960 Jan 29 11:05:08.221181 sshd-session[1997]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:08.228336 systemd[1]: sshd@2-188.245.239.20:22-147.75.109.163:32960.service: Deactivated successfully. Jan 29 11:05:08.234853 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:05:08.237093 systemd-logind[1595]: Session 2 logged out. Waiting for processes to exit. 
Jan 29 11:05:08.239664 systemd-logind[1595]: Removed session 2. Jan 29 11:05:08.389251 systemd[1]: Started sshd@3-188.245.239.20:22-147.75.109.163:47938.service - OpenSSH per-connection server daemon (147.75.109.163:47938). Jan 29 11:05:09.400706 sshd[2005]: Accepted publickey for core from 147.75.109.163 port 47938 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:09.403311 sshd-session[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:09.404412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 11:05:09.411187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:09.417858 systemd-logind[1595]: New session 3 of user core. Jan 29 11:05:09.422241 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:05:09.609375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:09.625041 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:09.698792 kubelet[2020]: E0129 11:05:09.698562 2020 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:09.703623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:09.704903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:10.080491 sshd[2012]: Connection closed by 147.75.109.163 port 47938 Jan 29 11:05:10.081970 sshd-session[2005]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:10.087995 systemd-logind[1595]: Session 3 logged out. Waiting for processes to exit. 
Jan 29 11:05:10.089308 systemd[1]: sshd@3-188.245.239.20:22-147.75.109.163:47938.service: Deactivated successfully. Jan 29 11:05:10.097597 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:05:10.100232 systemd-logind[1595]: Removed session 3. Jan 29 11:05:10.254365 systemd[1]: Started sshd@4-188.245.239.20:22-147.75.109.163:47954.service - OpenSSH per-connection server daemon (147.75.109.163:47954). Jan 29 11:05:11.276652 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 47954 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:11.280028 sshd-session[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:11.292574 systemd-logind[1595]: New session 4 of user core. Jan 29 11:05:11.301373 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:05:11.974524 sshd[2037]: Connection closed by 147.75.109.163 port 47954 Jan 29 11:05:11.976114 sshd-session[2034]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:11.983413 systemd[1]: sshd@4-188.245.239.20:22-147.75.109.163:47954.service: Deactivated successfully. Jan 29 11:05:11.992309 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:05:11.993952 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:05:11.997521 systemd-logind[1595]: Removed session 4. Jan 29 11:05:12.145394 systemd[1]: Started sshd@5-188.245.239.20:22-147.75.109.163:47968.service - OpenSSH per-connection server daemon (147.75.109.163:47968). Jan 29 11:05:13.143839 sshd[2042]: Accepted publickey for core from 147.75.109.163 port 47968 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:13.147281 sshd-session[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:13.156107 systemd-logind[1595]: New session 5 of user core. Jan 29 11:05:13.166476 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 11:05:13.689399 sudo[2046]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:05:13.689903 sudo[2046]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:13.713455 sudo[2046]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:13.875138 sshd[2045]: Connection closed by 147.75.109.163 port 47968 Jan 29 11:05:13.876205 sshd-session[2042]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:13.884500 systemd[1]: sshd@5-188.245.239.20:22-147.75.109.163:47968.service: Deactivated successfully. Jan 29 11:05:13.886379 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:05:13.888616 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:05:13.892348 systemd-logind[1595]: Removed session 5. Jan 29 11:05:14.045245 systemd[1]: Started sshd@6-188.245.239.20:22-147.75.109.163:47984.service - OpenSSH per-connection server daemon (147.75.109.163:47984). Jan 29 11:05:15.060150 sshd[2051]: Accepted publickey for core from 147.75.109.163 port 47984 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:15.063822 sshd-session[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:15.071195 systemd-logind[1595]: New session 6 of user core. Jan 29 11:05:15.077330 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:05:15.589115 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:05:15.590096 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:15.597878 sudo[2056]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:15.605177 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:05:15.605467 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:15.623433 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:05:15.654787 augenrules[2078]: No rules Jan 29 11:05:15.657020 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:05:15.657330 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:05:15.660169 sudo[2055]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:15.821781 sshd[2054]: Connection closed by 147.75.109.163 port 47984 Jan 29 11:05:15.822702 sshd-session[2051]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:15.828901 systemd[1]: sshd@6-188.245.239.20:22-147.75.109.163:47984.service: Deactivated successfully. Jan 29 11:05:15.832981 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:05:15.834079 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:05:15.835135 systemd-logind[1595]: Removed session 6. Jan 29 11:05:15.987182 systemd[1]: Started sshd@7-188.245.239.20:22-147.75.109.163:47996.service - OpenSSH per-connection server daemon (147.75.109.163:47996). 
Jan 29 11:05:16.982630 sshd[2087]: Accepted publickey for core from 147.75.109.163 port 47996 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:16.985491 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:16.991362 systemd-logind[1595]: New session 7 of user core. Jan 29 11:05:17.007315 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:05:17.501707 sudo[2091]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:05:17.503121 sudo[2091]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:17.844111 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:05:17.853671 (dockerd)[2109]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:05:18.125868 dockerd[2109]: time="2025-01-29T11:05:18.125020501Z" level=info msg="Starting up" Jan 29 11:05:18.214821 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2641143967-merged.mount: Deactivated successfully. Jan 29 11:05:18.246714 dockerd[2109]: time="2025-01-29T11:05:18.246575257Z" level=info msg="Loading containers: start." Jan 29 11:05:18.436107 kernel: Initializing XFRM netlink socket Jan 29 11:05:18.529290 systemd-networkd[1240]: docker0: Link UP Jan 29 11:05:18.566201 dockerd[2109]: time="2025-01-29T11:05:18.565950789Z" level=info msg="Loading containers: done." 
Jan 29 11:05:18.588446 dockerd[2109]: time="2025-01-29T11:05:18.588357132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:05:18.588694 dockerd[2109]: time="2025-01-29T11:05:18.588502655Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:05:18.588694 dockerd[2109]: time="2025-01-29T11:05:18.588642538Z" level=info msg="Daemon has completed initialization" Jan 29 11:05:18.637505 dockerd[2109]: time="2025-01-29T11:05:18.637377147Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:05:18.637737 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:05:19.212997 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1802679637-merged.mount: Deactivated successfully. Jan 29 11:05:19.768706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 11:05:19.779194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:19.812031 containerd[1618]: time="2025-01-29T11:05:19.810987541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:05:19.915121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:05:19.925413 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:19.989869 kubelet[2315]: E0129 11:05:19.989719 2315 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:19.992951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:19.993188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:20.470373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461852514.mount: Deactivated successfully. Jan 29 11:05:21.429802 containerd[1618]: time="2025-01-29T11:05:21.429715056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:21.431869 containerd[1618]: time="2025-01-29T11:05:21.431771340Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 29 11:05:21.432196 containerd[1618]: time="2025-01-29T11:05:21.432142068Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:21.438789 containerd[1618]: time="2025-01-29T11:05:21.437258819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:21.438789 containerd[1618]: time="2025-01-29T11:05:21.438523566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id 
\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.627477904s" Jan 29 11:05:21.438789 containerd[1618]: time="2025-01-29T11:05:21.438591288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 11:05:21.462920 containerd[1618]: time="2025-01-29T11:05:21.462848453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:05:22.733355 containerd[1618]: time="2025-01-29T11:05:22.733282022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:22.735416 containerd[1618]: time="2025-01-29T11:05:22.735205344Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 29 11:05:22.737044 containerd[1618]: time="2025-01-29T11:05:22.736384210Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:22.741261 containerd[1618]: time="2025-01-29T11:05:22.741204115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:22.742637 containerd[1618]: time="2025-01-29T11:05:22.742581986Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.279686252s" Jan 29 11:05:22.742885 containerd[1618]: time="2025-01-29T11:05:22.742807711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 11:05:22.769862 containerd[1618]: time="2025-01-29T11:05:22.769808662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:05:23.770149 containerd[1618]: time="2025-01-29T11:05:23.770060320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.772020 containerd[1618]: time="2025-01-29T11:05:23.771616675Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 29 11:05:23.773795 containerd[1618]: time="2025-01-29T11:05:23.773415035Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.780709 containerd[1618]: time="2025-01-29T11:05:23.778954118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.780709 containerd[1618]: time="2025-01-29T11:05:23.780556313Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.010426524s" Jan 29 11:05:23.780709 
containerd[1618]: time="2025-01-29T11:05:23.780599834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 11:05:23.811313 containerd[1618]: time="2025-01-29T11:05:23.811270875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:05:24.805688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190896134.mount: Deactivated successfully. Jan 29 11:05:25.175148 containerd[1618]: time="2025-01-29T11:05:25.174221056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.178142 containerd[1618]: time="2025-01-29T11:05:25.177991741Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 29 11:05:25.180153 containerd[1618]: time="2025-01-29T11:05:25.180072149Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.184220 containerd[1618]: time="2025-01-29T11:05:25.183040136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.184220 containerd[1618]: time="2025-01-29T11:05:25.184059279Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.372738003s" Jan 29 11:05:25.184220 containerd[1618]: time="2025-01-29T11:05:25.184100040Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 11:05:25.209940 containerd[1618]: time="2025-01-29T11:05:25.209846186Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:05:25.805974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740882858.mount: Deactivated successfully. Jan 29 11:05:26.541097 containerd[1618]: time="2025-01-29T11:05:26.539777558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:26.543603 containerd[1618]: time="2025-01-29T11:05:26.543558085Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 29 11:05:26.545294 containerd[1618]: time="2025-01-29T11:05:26.545213563Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:26.551503 containerd[1618]: time="2025-01-29T11:05:26.551437226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:26.554225 containerd[1618]: time="2025-01-29T11:05:26.554135848Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.344113779s" Jan 29 11:05:26.554419 containerd[1618]: time="2025-01-29T11:05:26.554397174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:05:26.581695 containerd[1618]: time="2025-01-29T11:05:26.581627760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:05:27.091290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216200555.mount: Deactivated successfully. Jan 29 11:05:27.103802 containerd[1618]: time="2025-01-29T11:05:27.101925227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.103802 containerd[1618]: time="2025-01-29T11:05:27.102950530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 29 11:05:27.104107 containerd[1618]: time="2025-01-29T11:05:27.103969074Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.109685 containerd[1618]: time="2025-01-29T11:05:27.109588645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.111236 containerd[1618]: time="2025-01-29T11:05:27.110485505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 528.806944ms" Jan 29 11:05:27.111236 containerd[1618]: time="2025-01-29T11:05:27.110534107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 11:05:27.135100 containerd[1618]: time="2025-01-29T11:05:27.135054796Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:05:27.715672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243306854.mount: Deactivated successfully. Jan 29 11:05:29.129427 containerd[1618]: time="2025-01-29T11:05:29.129332277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.137045 containerd[1618]: time="2025-01-29T11:05:29.136375124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 29 11:05:29.139140 containerd[1618]: time="2025-01-29T11:05:29.139077548Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.148413 containerd[1618]: time="2025-01-29T11:05:29.148319367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.150040 containerd[1618]: time="2025-01-29T11:05:29.149734681Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.014633244s" Jan 29 11:05:29.150040 containerd[1618]: time="2025-01-29T11:05:29.149800603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 11:05:30.011260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 11:05:30.021913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:05:30.152275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:30.167363 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:30.243069 kubelet[2565]: E0129 11:05:30.243010 2565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:30.245638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:30.245856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:34.364388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:34.373118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:34.408266 systemd[1]: Reloading requested from client PID 2601 ('systemctl') (unit session-7.scope)... Jan 29 11:05:34.408491 systemd[1]: Reloading... Jan 29 11:05:34.552905 zram_generator::config[2651]: No configuration found. Jan 29 11:05:34.657381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:05:34.726600 systemd[1]: Reloading finished in 317 ms. Jan 29 11:05:34.787377 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:05:34.787457 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:05:34.788055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:34.795666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:05:34.957099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:34.964170 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:05:35.012588 kubelet[2701]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:35.012588 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:05:35.012588 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:35.013432 kubelet[2701]: I0129 11:05:35.012685 2701 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:05:35.606783 kubelet[2701]: I0129 11:05:35.605180 2701 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:05:35.606783 kubelet[2701]: I0129 11:05:35.605249 2701 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:05:35.606783 kubelet[2701]: I0129 11:05:35.605510 2701 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:05:35.627560 kubelet[2701]: I0129 11:05:35.627509 2701 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:05:35.627851 kubelet[2701]: E0129 11:05:35.627826 2701 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post "https://188.245.239.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.640338 kubelet[2701]: I0129 11:05:35.640257 2701 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:05:35.644563 kubelet[2701]: I0129 11:05:35.644419 2701 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:05:35.644850 kubelet[2701]: I0129 11:05:35.644516 2701 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-5-7d4b33c67e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","E
xperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:05:35.644965 kubelet[2701]: I0129 11:05:35.644879 2701 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:05:35.644965 kubelet[2701]: I0129 11:05:35.644893 2701 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:05:35.645338 kubelet[2701]: I0129 11:05:35.645263 2701 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:35.646734 kubelet[2701]: I0129 11:05:35.646688 2701 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:05:35.646734 kubelet[2701]: I0129 11:05:35.646726 2701 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:05:35.646937 kubelet[2701]: I0129 11:05:35.646922 2701 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:05:35.647984 kubelet[2701]: I0129 11:05:35.647050 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:05:35.651810 kubelet[2701]: I0129 11:05:35.650515 2701 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:05:35.651810 kubelet[2701]: I0129 11:05:35.651526 2701 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:05:35.651810 kubelet[2701]: W0129 11:05:35.651599 2701 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:05:35.655534 kubelet[2701]: W0129 11:05:35.655445 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.239.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.655534 kubelet[2701]: E0129 11:05:35.655534 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.239.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.655847 kubelet[2701]: W0129 11:05:35.655612 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.239.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-5-7d4b33c67e&limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.655847 kubelet[2701]: E0129 11:05:35.655641 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.239.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-5-7d4b33c67e&limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.656656 kubelet[2701]: I0129 11:05:35.656623 2701 server.go:1264] "Started kubelet" Jan 29 11:05:35.662843 kubelet[2701]: I0129 11:05:35.662324 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:05:35.668991 kubelet[2701]: I0129 11:05:35.668306 2701 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:05:35.668991 kubelet[2701]: E0129 11:05:35.668620 2701 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.239.20:6443/api/v1/namespaces/default/events\": dial tcp 188.245.239.20:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4152-2-0-5-7d4b33c67e.181f2510e39afdfa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-5-7d4b33c67e,UID:ci-4152-2-0-5-7d4b33c67e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-5-7d4b33c67e,},FirstTimestamp:2025-01-29 11:05:35.656590842 +0000 UTC m=+0.688906427,LastTimestamp:2025-01-29 11:05:35.656590842 +0000 UTC m=+0.688906427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-5-7d4b33c67e,}" Jan 29 11:05:35.669918 kubelet[2701]: I0129 11:05:35.669500 2701 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:05:35.670201 kubelet[2701]: I0129 11:05:35.670128 2701 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:05:35.670535 kubelet[2701]: I0129 11:05:35.670514 2701 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:05:35.670782 kubelet[2701]: I0129 11:05:35.670736 2701 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:05:35.674221 kubelet[2701]: I0129 11:05:35.674176 2701 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:05:35.674353 kubelet[2701]: I0129 11:05:35.674289 2701 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:05:35.675867 kubelet[2701]: E0129 11:05:35.674523 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.239.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-5-7d4b33c67e?timeout=10s\": dial tcp 188.245.239.20:6443: connect: connection refused" interval="200ms" Jan 29 11:05:35.675867 kubelet[2701]: W0129 11:05:35.674942 2701 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.239.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.675867 kubelet[2701]: E0129 11:05:35.674980 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.239.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.676917 kubelet[2701]: I0129 11:05:35.676882 2701 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:05:35.677208 kubelet[2701]: I0129 11:05:35.677153 2701 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:05:35.680004 kubelet[2701]: I0129 11:05:35.679911 2701 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:05:35.694934 kubelet[2701]: E0129 11:05:35.694890 2701 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:05:35.695105 kubelet[2701]: I0129 11:05:35.694966 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:05:35.699016 kubelet[2701]: I0129 11:05:35.698975 2701 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:05:35.699211 kubelet[2701]: I0129 11:05:35.699199 2701 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:05:35.699246 kubelet[2701]: I0129 11:05:35.699225 2701 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:05:35.699299 kubelet[2701]: E0129 11:05:35.699276 2701 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:05:35.708173 kubelet[2701]: W0129 11:05:35.707942 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.239.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.708173 kubelet[2701]: E0129 11:05:35.708015 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.239.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:35.715090 kubelet[2701]: I0129 11:05:35.715011 2701 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:05:35.715090 kubelet[2701]: I0129 11:05:35.715069 2701 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:05:35.715090 kubelet[2701]: I0129 11:05:35.715101 2701 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:35.716841 kubelet[2701]: I0129 11:05:35.716794 2701 policy_none.go:49] "None policy: Start" Jan 29 11:05:35.717640 kubelet[2701]: I0129 11:05:35.717594 2701 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:05:35.717640 kubelet[2701]: I0129 11:05:35.717646 2701 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:05:35.725304 kubelet[2701]: I0129 11:05:35.725267 2701 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:05:35.725540 kubelet[2701]: I0129 11:05:35.725500 2701 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:05:35.725628 kubelet[2701]: I0129 11:05:35.725616 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:05:35.729288 kubelet[2701]: E0129 11:05:35.729255 2701 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-5-7d4b33c67e\" not found" Jan 29 11:05:35.774714 kubelet[2701]: I0129 11:05:35.774620 2701 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.775232 kubelet[2701]: E0129 11:05:35.775192 2701 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.239.20:6443/api/v1/nodes\": dial tcp 188.245.239.20:6443: connect: connection refused" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.799563 kubelet[2701]: I0129 11:05:35.799486 2701 topology_manager.go:215] "Topology Admit Handler" podUID="5b8136fb4548cc73c272fb8e5d53da68" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.803821 kubelet[2701]: I0129 11:05:35.803775 2701 topology_manager.go:215] "Topology Admit Handler" podUID="dab39dd5d056da05f7e92df3fe5df904" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.807647 kubelet[2701]: I0129 11:05:35.807595 2701 topology_manager.go:215] "Topology Admit Handler" podUID="d0a780924a6de311c9699509865026f3" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.875524 kubelet[2701]: E0129 11:05:35.875275 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.239.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-5-7d4b33c67e?timeout=10s\": dial tcp 
188.245.239.20:6443: connect: connection refused" interval="400ms" Jan 29 11:05:35.976991 kubelet[2701]: I0129 11:05:35.974850 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.976991 kubelet[2701]: I0129 11:05:35.974960 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.976991 kubelet[2701]: I0129 11:05:35.975003 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.976991 kubelet[2701]: I0129 11:05:35.975041 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.976991 kubelet[2701]: I0129 11:05:35.975137 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.977413 kubelet[2701]: I0129 11:05:35.975180 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.977413 kubelet[2701]: I0129 11:05:35.975215 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.977413 kubelet[2701]: I0129 11:05:35.975251 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.977413 kubelet[2701]: I0129 11:05:35.975307 2701 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0a780924a6de311c9699509865026f3-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-5-7d4b33c67e\" (UID: \"d0a780924a6de311c9699509865026f3\") " pod="kube-system/kube-scheduler-ci-4152-2-0-5-7d4b33c67e" Jan 29 
11:05:35.983851 kubelet[2701]: I0129 11:05:35.983147 2701 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:35.983851 kubelet[2701]: E0129 11:05:35.983571 2701 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.239.20:6443/api/v1/nodes\": dial tcp 188.245.239.20:6443: connect: connection refused" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:36.118333 containerd[1618]: time="2025-01-29T11:05:36.117992217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-5-7d4b33c67e,Uid:5b8136fb4548cc73c272fb8e5d53da68,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:36.122043 containerd[1618]: time="2025-01-29T11:05:36.121828273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-5-7d4b33c67e,Uid:dab39dd5d056da05f7e92df3fe5df904,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:36.126021 containerd[1618]: time="2025-01-29T11:05:36.125911776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-5-7d4b33c67e,Uid:d0a780924a6de311c9699509865026f3,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:36.276375 kubelet[2701]: E0129 11:05:36.276204 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.239.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-5-7d4b33c67e?timeout=10s\": dial tcp 188.245.239.20:6443: connect: connection refused" interval="800ms" Jan 29 11:05:36.386837 kubelet[2701]: I0129 11:05:36.386151 2701 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:36.386837 kubelet[2701]: E0129 11:05:36.386569 2701 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.239.20:6443/api/v1/nodes\": dial tcp 188.245.239.20:6443: connect: connection refused" node="ci-4152-2-0-5-7d4b33c67e" Jan 29 11:05:36.677142 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236997038.mount: Deactivated successfully. Jan 29 11:05:36.682622 kubelet[2701]: W0129 11:05:36.682511 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.239.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-5-7d4b33c67e&limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:36.682622 kubelet[2701]: E0129 11:05:36.682590 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.239.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-5-7d4b33c67e&limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:36.688145 containerd[1618]: time="2025-01-29T11:05:36.688049469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:36.693960 containerd[1618]: time="2025-01-29T11:05:36.693874776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 11:05:36.695357 containerd[1618]: time="2025-01-29T11:05:36.695300492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:36.699311 containerd[1618]: time="2025-01-29T11:05:36.698108922Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:36.700544 containerd[1618]: time="2025-01-29T11:05:36.700487862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:36.702279 containerd[1618]: time="2025-01-29T11:05:36.702229826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:05:36.703481 containerd[1618]: time="2025-01-29T11:05:36.703419536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:05:36.704776 containerd[1618]: time="2025-01-29T11:05:36.704603526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:36.705928 containerd[1618]: time="2025-01-29T11:05:36.705612871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.694836ms" Jan 29 11:05:36.710993 containerd[1618]: time="2025-01-29T11:05:36.710925445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.121462ms" Jan 29 11:05:36.733776 containerd[1618]: time="2025-01-29T11:05:36.732521868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 605.953155ms" Jan 29 11:05:36.836007 kubelet[2701]: W0129 11:05:36.835925 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.239.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:36.836266 kubelet[2701]: E0129 11:05:36.836252 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.239.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused Jan 29 11:05:36.845022 containerd[1618]: time="2025-01-29T11:05:36.844905013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:36.845693 containerd[1618]: time="2025-01-29T11:05:36.845649872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:36.845944 containerd[1618]: time="2025-01-29T11:05:36.845896118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:36.846667 containerd[1618]: time="2025-01-29T11:05:36.846589815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:36.853276 containerd[1618]: time="2025-01-29T11:05:36.853154620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:36.853276 containerd[1618]: time="2025-01-29T11:05:36.853235143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:36.853499 containerd[1618]: time="2025-01-29T11:05:36.853252943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:36.853499 containerd[1618]: time="2025-01-29T11:05:36.853361506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:36.855732 containerd[1618]: time="2025-01-29T11:05:36.855608922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:36.856083 containerd[1618]: time="2025-01-29T11:05:36.855803327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:36.856083 containerd[1618]: time="2025-01-29T11:05:36.855819607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:36.856628 containerd[1618]: time="2025-01-29T11:05:36.856565386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:36.923876 kubelet[2701]: W0129 11:05:36.923700 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.239.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused
Jan 29 11:05:36.923876 kubelet[2701]: E0129 11:05:36.923844 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.239.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused
Jan 29 11:05:36.935306 kubelet[2701]: W0129 11:05:36.934387 2701 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.239.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused
Jan 29 11:05:36.935306 kubelet[2701]: E0129 11:05:36.934469 2701 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.239.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.239.20:6443: connect: connection refused
Jan 29 11:05:36.949109 containerd[1618]: time="2025-01-29T11:05:36.948904708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-5-7d4b33c67e,Uid:d0a780924a6de311c9699509865026f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a5d4e2e4f1392fcea14eb0f0d0e6b2b4fd5af047fa8fc3a7b0b4720b398e0d\""
Jan 29 11:05:36.954817 containerd[1618]: time="2025-01-29T11:05:36.952983770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-5-7d4b33c67e,Uid:5b8136fb4548cc73c272fb8e5d53da68,Namespace:kube-system,Attempt:0,} returns sandbox id \"87082ad5270afdfa363720561cc47577f911cb5d08df8567ce43ea135a288bde\""
Jan 29 11:05:36.959092 containerd[1618]: time="2025-01-29T11:05:36.959039443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-5-7d4b33c67e,Uid:dab39dd5d056da05f7e92df3fe5df904,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8175fd0fad6a252c79432387d27f738698823d7fb1ea197e5b43a4d347390a7\""
Jan 29 11:05:36.961615 containerd[1618]: time="2025-01-29T11:05:36.961489704Z" level=info msg="CreateContainer within sandbox \"87082ad5270afdfa363720561cc47577f911cb5d08df8567ce43ea135a288bde\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 11:05:36.962206 containerd[1618]: time="2025-01-29T11:05:36.962096720Z" level=info msg="CreateContainer within sandbox \"f9a5d4e2e4f1392fcea14eb0f0d0e6b2b4fd5af047fa8fc3a7b0b4720b398e0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 11:05:36.966764 containerd[1618]: time="2025-01-29T11:05:36.966699155Z" level=info msg="CreateContainer within sandbox \"d8175fd0fad6a252c79432387d27f738698823d7fb1ea197e5b43a4d347390a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 11:05:36.991792 containerd[1618]: time="2025-01-29T11:05:36.991583901Z" level=info msg="CreateContainer within sandbox \"d8175fd0fad6a252c79432387d27f738698823d7fb1ea197e5b43a4d347390a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b82554d81d7107c79c04d30af3bdc35d61b50ad858157c5446679c387df6b76\""
Jan 29 11:05:36.993547 containerd[1618]: time="2025-01-29T11:05:36.993204462Z" level=info msg="CreateContainer within sandbox \"87082ad5270afdfa363720561cc47577f911cb5d08df8567ce43ea135a288bde\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e43d11260e50bae2a8779eb8d15c8656f81233932809ad66d66988687db1aa8f\""
Jan 29 11:05:36.994132 containerd[1618]: time="2025-01-29T11:05:36.994039523Z" level=info msg="StartContainer for \"e43d11260e50bae2a8779eb8d15c8656f81233932809ad66d66988687db1aa8f\""
Jan 29 11:05:36.995714 containerd[1618]: time="2025-01-29T11:05:36.995646883Z" level=info msg="CreateContainer within sandbox \"f9a5d4e2e4f1392fcea14eb0f0d0e6b2b4fd5af047fa8fc3a7b0b4720b398e0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92be2ca413da16295ebb03f6b4f2d463006980ab18d75e0a898f5c01b86c4e7e\""
Jan 29 11:05:36.995861 containerd[1618]: time="2025-01-29T11:05:36.995845328Z" level=info msg="StartContainer for \"7b82554d81d7107c79c04d30af3bdc35d61b50ad858157c5446679c387df6b76\""
Jan 29 11:05:37.005802 containerd[1618]: time="2025-01-29T11:05:37.004817314Z" level=info msg="StartContainer for \"92be2ca413da16295ebb03f6b4f2d463006980ab18d75e0a898f5c01b86c4e7e\""
Jan 29 11:05:37.080866 kubelet[2701]: E0129 11:05:37.080812 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.239.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-5-7d4b33c67e?timeout=10s\": dial tcp 188.245.239.20:6443: connect: connection refused" interval="1.6s"
Jan 29 11:05:37.089337 containerd[1618]: time="2025-01-29T11:05:37.089289893Z" level=info msg="StartContainer for \"e43d11260e50bae2a8779eb8d15c8656f81233932809ad66d66988687db1aa8f\" returns successfully"
Jan 29 11:05:37.144502 containerd[1618]: time="2025-01-29T11:05:37.142960132Z" level=info msg="StartContainer for \"92be2ca413da16295ebb03f6b4f2d463006980ab18d75e0a898f5c01b86c4e7e\" returns successfully"
Jan 29 11:05:37.160797 containerd[1618]: time="2025-01-29T11:05:37.157557542Z" level=info msg="StartContainer for \"7b82554d81d7107c79c04d30af3bdc35d61b50ad858157c5446679c387df6b76\" returns successfully"
Jan 29 11:05:37.192205 kubelet[2701]: I0129 11:05:37.191978 2701 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:37.194035 kubelet[2701]: E0129 11:05:37.193975 2701 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.239.20:6443/api/v1/nodes\": dial tcp 188.245.239.20:6443: connect: connection refused" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:38.800767 kubelet[2701]: I0129 11:05:38.799440 2701 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:39.382442 kubelet[2701]: E0129 11:05:39.382406 2701 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-5-7d4b33c67e\" not found" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:39.513878 kubelet[2701]: I0129 11:05:39.512187 2701 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:39.650884 kubelet[2701]: I0129 11:05:39.650512 2701 apiserver.go:52] "Watching apiserver"
Jan 29 11:05:39.676810 kubelet[2701]: I0129 11:05:39.675053 2701 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:05:39.776887 kubelet[2701]: E0129 11:05:39.775021 2701 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:42.427919 systemd[1]: Reloading requested from client PID 2975 ('systemctl') (unit session-7.scope)...
Jan 29 11:05:42.427936 systemd[1]: Reloading...
Jan 29 11:05:42.517785 zram_generator::config[3018]: No configuration found.
Jan 29 11:05:42.650220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:05:42.726266 systemd[1]: Reloading finished in 297 ms.
Jan 29 11:05:42.761632 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:05:42.762053 kubelet[2701]: E0129 11:05:42.761613 2701 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152-2-0-5-7d4b33c67e.181f2510e39afdfa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-5-7d4b33c67e,UID:ci-4152-2-0-5-7d4b33c67e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-5-7d4b33c67e,},FirstTimestamp:2025-01-29 11:05:35.656590842 +0000 UTC m=+0.688906427,LastTimestamp:2025-01-29 11:05:35.656590842 +0000 UTC m=+0.688906427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-5-7d4b33c67e,}"
Jan 29 11:05:42.762053 kubelet[2701]: I0129 11:05:42.761956 2701 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:05:42.778420 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:05:42.779270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:05:42.790268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:05:42.915061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:05:42.930004 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:05:42.986005 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:05:42.986005 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:05:42.986005 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:05:42.986005 kubelet[3069]: I0129 11:05:42.985881 3069 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:05:42.991975 kubelet[3069]: I0129 11:05:42.991921 3069 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 11:05:42.991975 kubelet[3069]: I0129 11:05:42.991960 3069 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:05:42.992210 kubelet[3069]: I0129 11:05:42.992166 3069 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 11:05:42.993792 kubelet[3069]: I0129 11:05:42.993723 3069 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 11:05:42.995463 kubelet[3069]: I0129 11:05:42.995433 3069 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:05:43.004524 kubelet[3069]: I0129 11:05:43.003388 3069 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:05:43.004524 kubelet[3069]: I0129 11:05:43.004005 3069 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:05:43.004524 kubelet[3069]: I0129 11:05:43.004035 3069 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-5-7d4b33c67e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 11:05:43.004524 kubelet[3069]: I0129 11:05:43.004367 3069 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004380 3069 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004425 3069 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004529 3069 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004542 3069 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004574 3069 kubelet.go:312] "Adding apiserver pod source"
Jan 29 11:05:43.004788 kubelet[3069]: I0129 11:05:43.004597 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:05:43.013623 kubelet[3069]: I0129 11:05:43.012891 3069 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:05:43.013623 kubelet[3069]: I0129 11:05:43.013153 3069 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:05:43.013623 kubelet[3069]: I0129 11:05:43.013575 3069 server.go:1264] "Started kubelet"
Jan 29 11:05:43.017840 kubelet[3069]: I0129 11:05:43.017610 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:05:43.032895 kubelet[3069]: I0129 11:05:43.032833 3069 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:05:43.038276 kubelet[3069]: I0129 11:05:43.035920 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:05:43.038276 kubelet[3069]: I0129 11:05:43.036319 3069 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:05:43.041821 kubelet[3069]: I0129 11:05:43.041787 3069 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:05:43.043094 kubelet[3069]: I0129 11:05:43.043064 3069 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:05:43.043278 kubelet[3069]: I0129 11:05:43.043263 3069 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:05:43.044810 kubelet[3069]: I0129 11:05:43.043923 3069 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:05:43.050896 kubelet[3069]: I0129 11:05:43.050863 3069 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:05:43.051238 kubelet[3069]: I0129 11:05:43.051211 3069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:05:43.059451 kubelet[3069]: I0129 11:05:43.059405 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:05:43.064436 kubelet[3069]: E0129 11:05:43.064273 3069 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:05:43.064893 kubelet[3069]: I0129 11:05:43.064846 3069 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:05:43.067087 kubelet[3069]: I0129 11:05:43.067047 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:05:43.067087 kubelet[3069]: I0129 11:05:43.067096 3069 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:05:43.067534 kubelet[3069]: I0129 11:05:43.067112 3069 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:05:43.067534 kubelet[3069]: E0129 11:05:43.067153 3069 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:05:43.129476 kubelet[3069]: I0129 11:05:43.129449 3069 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:05:43.129675 kubelet[3069]: I0129 11:05:43.129658 3069 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:05:43.129789 kubelet[3069]: I0129 11:05:43.129778 3069 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:05:43.130171 kubelet[3069]: I0129 11:05:43.130096 3069 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 11:05:43.130171 kubelet[3069]: I0129 11:05:43.130115 3069 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 11:05:43.130171 kubelet[3069]: I0129 11:05:43.130138 3069 policy_none.go:49] "None policy: Start"
Jan 29 11:05:43.131432 kubelet[3069]: I0129 11:05:43.131175 3069 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:05:43.131432 kubelet[3069]: I0129 11:05:43.131275 3069 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:05:43.131878 kubelet[3069]: I0129 11:05:43.131706 3069 state_mem.go:75] "Updated machine memory state"
Jan 29 11:05:43.133228 kubelet[3069]: I0129 11:05:43.133207 3069 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:05:43.133835 kubelet[3069]: I0129 11:05:43.133731 3069 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:05:43.134087 kubelet[3069]: I0129 11:05:43.133986 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:05:43.146950 kubelet[3069]: I0129 11:05:43.146903 3069 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.168133 kubelet[3069]: I0129 11:05:43.168050 3069 topology_manager.go:215] "Topology Admit Handler" podUID="5b8136fb4548cc73c272fb8e5d53da68" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.168300 kubelet[3069]: I0129 11:05:43.168261 3069 topology_manager.go:215] "Topology Admit Handler" podUID="dab39dd5d056da05f7e92df3fe5df904" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.168353 kubelet[3069]: I0129 11:05:43.168317 3069 topology_manager.go:215] "Topology Admit Handler" podUID="d0a780924a6de311c9699509865026f3" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.179927 kubelet[3069]: I0129 11:05:43.179691 3069 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.179927 kubelet[3069]: I0129 11:05:43.179830 3069 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345380 kubelet[3069]: I0129 11:05:43.345284 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345380 kubelet[3069]: I0129 11:05:43.345349 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345380 kubelet[3069]: I0129 11:05:43.345378 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345380 kubelet[3069]: I0129 11:05:43.345399 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345722 kubelet[3069]: I0129 11:05:43.345420 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345722 kubelet[3069]: I0129 11:05:43.345441 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345722 kubelet[3069]: I0129 11:05:43.345465 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b8136fb4548cc73c272fb8e5d53da68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-5-7d4b33c67e\" (UID: \"5b8136fb4548cc73c272fb8e5d53da68\") " pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345722 kubelet[3069]: I0129 11:05:43.345488 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dab39dd5d056da05f7e92df3fe5df904-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-5-7d4b33c67e\" (UID: \"dab39dd5d056da05f7e92df3fe5df904\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.345722 kubelet[3069]: I0129 11:05:43.345511 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0a780924a6de311c9699509865026f3-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-5-7d4b33c67e\" (UID: \"d0a780924a6de311c9699509865026f3\") " pod="kube-system/kube-scheduler-ci-4152-2-0-5-7d4b33c67e"
Jan 29 11:05:43.413840 sudo[3100]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 11:05:43.414139 sudo[3100]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 11:05:43.911984 sudo[3100]: pam_unix(sudo:session): session closed for user root
Jan 29 11:05:44.013878 kubelet[3069]: I0129 11:05:44.013575 3069 apiserver.go:52] "Watching apiserver"
Jan 29 11:05:44.045260 kubelet[3069]: I0129 11:05:44.043778 3069 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:05:44.131095 kubelet[3069]: I0129 11:05:44.131011 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-5-7d4b33c67e" podStartSLOduration=1.130936345 podStartE2EDuration="1.130936345s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.130681978 +0000 UTC m=+1.195525560" watchObservedRunningTime="2025-01-29 11:05:44.130936345 +0000 UTC m=+1.195779927"
Jan 29 11:05:44.174272 kubelet[3069]: I0129 11:05:44.173785 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-5-7d4b33c67e" podStartSLOduration=1.1737599969999999 podStartE2EDuration="1.173759997s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.152524956 +0000 UTC m=+1.217368538" watchObservedRunningTime="2025-01-29 11:05:44.173759997 +0000 UTC m=+1.238603619"
Jan 29 11:05:44.191005 kubelet[3069]: I0129 11:05:44.190339 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-5-7d4b33c67e" podStartSLOduration=1.190316515 podStartE2EDuration="1.190316515s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.172614967 +0000 UTC m=+1.237458549" watchObservedRunningTime="2025-01-29 11:05:44.190316515 +0000 UTC m=+1.255160097"
Jan 29 11:05:46.208150 sudo[2091]: pam_unix(sudo:session): session closed for user root
Jan 29 11:05:46.366815 sshd[2090]: Connection closed by 147.75.109.163 port 47996
Jan 29 11:05:46.369000 sshd-session[2087]: pam_unix(sshd:session): session closed for user core
Jan 29 11:05:46.373092 systemd[1]: sshd@7-188.245.239.20:22-147.75.109.163:47996.service: Deactivated successfully.
Jan 29 11:05:46.379199 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:05:46.382007 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:05:46.383655 systemd-logind[1595]: Removed session 7.
Jan 29 11:05:57.486144 kubelet[3069]: I0129 11:05:57.486086 3069 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:05:57.488709 kubelet[3069]: I0129 11:05:57.487881 3069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:05:57.488771 containerd[1618]: time="2025-01-29T11:05:57.487504370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:05:58.428780 kubelet[3069]: I0129 11:05:58.423041 3069 topology_manager.go:215] "Topology Admit Handler" podUID="f4d6653d-b191-42a8-95a9-e46074297960" podNamespace="kube-system" podName="kube-proxy-bzgzk"
Jan 29 11:05:58.445160 kubelet[3069]: I0129 11:05:58.445117 3069 topology_manager.go:215] "Topology Admit Handler" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" podNamespace="kube-system" podName="cilium-tdczl"
Jan 29 11:05:58.457608 kubelet[3069]: I0129 11:05:58.457561 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4d6653d-b191-42a8-95a9-e46074297960-kube-proxy\") pod \"kube-proxy-bzgzk\" (UID: \"f4d6653d-b191-42a8-95a9-e46074297960\") " pod="kube-system/kube-proxy-bzgzk"
Jan 29 11:05:58.457608 kubelet[3069]: I0129 11:05:58.457600 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4d6653d-b191-42a8-95a9-e46074297960-xtables-lock\") pod \"kube-proxy-bzgzk\" (UID: \"f4d6653d-b191-42a8-95a9-e46074297960\") " pod="kube-system/kube-proxy-bzgzk"
Jan 29 11:05:58.457790 kubelet[3069]: I0129 11:05:58.457624 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4d6653d-b191-42a8-95a9-e46074297960-lib-modules\") pod \"kube-proxy-bzgzk\" (UID: \"f4d6653d-b191-42a8-95a9-e46074297960\") " pod="kube-system/kube-proxy-bzgzk"
Jan 29 11:05:58.457790 kubelet[3069]: I0129 11:05:58.457641 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbjh\" (UniqueName: \"kubernetes.io/projected/f4d6653d-b191-42a8-95a9-e46074297960-kube-api-access-8bbjh\") pod \"kube-proxy-bzgzk\" (UID: \"f4d6653d-b191-42a8-95a9-e46074297960\") " pod="kube-system/kube-proxy-bzgzk"
Jan 29 11:05:58.554221 kubelet[3069]: I0129 11:05:58.554167 3069 topology_manager.go:215] "Topology Admit Handler" podUID="1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" podNamespace="kube-system" podName="cilium-operator-599987898-r76lr"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.557971 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-hubble-tls\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.558037 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-hostproc\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.558058 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-xtables-lock\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.558076 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-bpf-maps\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.558094 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-lib-modules\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560427 kubelet[3069]: I0129 11:05:58.558110 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-config-path\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560681 kubelet[3069]: I0129 11:05:58.558125 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-net\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560681 kubelet[3069]: I0129 11:05:58.558139 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-kernel\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560681 kubelet[3069]: I0129 11:05:58.558163 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-etc-cni-netd\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560681 kubelet[3069]: I0129 11:05:58.558196 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c337fed5-245e-44e2-949a-39bdbd3c0207-clustermesh-secrets\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560681 kubelet[3069]: I0129 11:05:58.558210 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687zx\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-kube-api-access-687zx\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560812 kubelet[3069]: I0129 11:05:58.558235 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-run\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560812 kubelet[3069]: I0129 11:05:58.558252 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-cgroup\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.560812 kubelet[3069]: I0129 11:05:58.558270 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cni-path\") pod \"cilium-tdczl\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") " pod="kube-system/cilium-tdczl"
Jan 29 11:05:58.659202 kubelet[3069]: I0129 11:05:58.659155 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4ps\" (UniqueName: \"kubernetes.io/projected/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-kube-api-access-kg4ps\") pod \"cilium-operator-599987898-r76lr\" (UID: \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\") " pod="kube-system/cilium-operator-599987898-r76lr"
Jan 29 11:05:58.659736 kubelet[3069]: I0129 11:05:58.659665 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-cilium-config-path\") pod \"cilium-operator-599987898-r76lr\" (UID: \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\") " pod="kube-system/cilium-operator-599987898-r76lr"
Jan 29 11:05:58.741223 containerd[1618]: time="2025-01-29T11:05:58.741054069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzgzk,Uid:f4d6653d-b191-42a8-95a9-e46074297960,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:58.755463 containerd[1618]: time="2025-01-29T11:05:58.754915378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdczl,Uid:c337fed5-245e-44e2-949a-39bdbd3c0207,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:58.784548 containerd[1618]: time="2025-01-29T11:05:58.784252920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:05:58.784774 containerd[1618]: time="2025-01-29T11:05:58.784432485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:05:58.785309 containerd[1618]: time="2025-01-29T11:05:58.785132905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:58.785309 containerd[1618]: time="2025-01-29T11:05:58.785272589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:58.797791 containerd[1618]: time="2025-01-29T11:05:58.797670537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:05:58.797982 containerd[1618]: time="2025-01-29T11:05:58.797753019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:05:58.797982 containerd[1618]: time="2025-01-29T11:05:58.797777420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:58.797982 containerd[1618]: time="2025-01-29T11:05:58.797867382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:58.842846 containerd[1618]: time="2025-01-29T11:05:58.842808562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzgzk,Uid:f4d6653d-b191-42a8-95a9-e46074297960,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a8b67da30bf72a76a013759489adb6998950aafa9140e34714349c5ec9299e\""
Jan 29 11:05:58.847979 containerd[1618]: time="2025-01-29T11:05:58.847929506Z" level=info msg="CreateContainer within sandbox \"80a8b67da30bf72a76a013759489adb6998950aafa9140e34714349c5ec9299e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:05:58.851918 containerd[1618]: time="2025-01-29T11:05:58.851881257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdczl,Uid:c337fed5-245e-44e2-949a-39bdbd3c0207,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\""
Jan 29 11:05:58.856720 containerd[1618]: time="2025-01-29T11:05:58.856275140Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 11:05:58.864116 containerd[1618]: time="2025-01-29T11:05:58.864070798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-r76lr,Uid:1a0fd7d9-3ac9-4043-a6c3-52c444a0b277,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:58.872147 containerd[1618]: time="2025-01-29T11:05:58.872102023Z" level=info msg="CreateContainer within sandbox \"80a8b67da30bf72a76a013759489adb6998950aafa9140e34714349c5ec9299e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37faad9f2282a9463b77245da51d4a61260b81697f45870996702065946ae413\""
Jan 29 11:05:58.874578 containerd[1618]: time="2025-01-29T11:05:58.873205494Z" level=info msg="StartContainer for \"37faad9f2282a9463b77245da51d4a61260b81697f45870996702065946ae413\""
Jan 29 11:05:58.898161 containerd[1618]:
time="2025-01-29T11:05:58.898068272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:58.898371 containerd[1618]: time="2025-01-29T11:05:58.898343079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:58.898555 containerd[1618]: time="2025-01-29T11:05:58.898438082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:58.898849 containerd[1618]: time="2025-01-29T11:05:58.898719410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:58.966023 containerd[1618]: time="2025-01-29T11:05:58.963990600Z" level=info msg="StartContainer for \"37faad9f2282a9463b77245da51d4a61260b81697f45870996702065946ae413\" returns successfully" Jan 29 11:05:58.972161 containerd[1618]: time="2025-01-29T11:05:58.972123868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-r76lr,Uid:1a0fd7d9-3ac9-4043-a6c3-52c444a0b277,Namespace:kube-system,Attempt:0,} returns sandbox id \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\"" Jan 29 11:05:59.156710 kubelet[3069]: I0129 11:05:59.155963 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzgzk" podStartSLOduration=1.155937796 podStartE2EDuration="1.155937796s" podCreationTimestamp="2025-01-29 11:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:59.154087304 +0000 UTC m=+16.218930886" watchObservedRunningTime="2025-01-29 11:05:59.155937796 +0000 UTC m=+16.220781418" Jan 29 11:06:02.286496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150046307.mount: Deactivated 
successfully. Jan 29 11:06:03.833632 containerd[1618]: time="2025-01-29T11:06:03.833551991Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:06:03.835904 containerd[1618]: time="2025-01-29T11:06:03.835821895Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:06:03.836813 containerd[1618]: time="2025-01-29T11:06:03.836771242Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:06:03.841427 containerd[1618]: time="2025-01-29T11:06:03.841375773Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.985039712s" Jan 29 11:06:03.841808 containerd[1618]: time="2025-01-29T11:06:03.841598340Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:06:03.843824 containerd[1618]: time="2025-01-29T11:06:03.843418911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:06:03.848765 containerd[1618]: time="2025-01-29T11:06:03.847476707Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:06:03.869115 containerd[1618]: time="2025-01-29T11:06:03.869013200Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\"" Jan 29 11:06:03.871054 containerd[1618]: time="2025-01-29T11:06:03.871017817Z" level=info msg="StartContainer for \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\"" Jan 29 11:06:03.936501 containerd[1618]: time="2025-01-29T11:06:03.936358156Z" level=info msg="StartContainer for \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\" returns successfully" Jan 29 11:06:03.973879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814-rootfs.mount: Deactivated successfully. Jan 29 11:06:04.107162 containerd[1618]: time="2025-01-29T11:06:04.106907858Z" level=info msg="shim disconnected" id=6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814 namespace=k8s.io Jan 29 11:06:04.107162 containerd[1618]: time="2025-01-29T11:06:04.106975500Z" level=warning msg="cleaning up after shim disconnected" id=6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814 namespace=k8s.io Jan 29 11:06:04.107162 containerd[1618]: time="2025-01-29T11:06:04.106984420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:04.160205 containerd[1618]: time="2025-01-29T11:06:04.160062335Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:06:04.182105 containerd[1618]: time="2025-01-29T11:06:04.181972520Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\"" Jan 29 11:06:04.187261 containerd[1618]: time="2025-01-29T11:06:04.187213989Z" level=info msg="StartContainer for \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\"" Jan 29 11:06:04.248691 containerd[1618]: time="2025-01-29T11:06:04.248614262Z" level=info msg="StartContainer for \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\" returns successfully" Jan 29 11:06:04.259678 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:06:04.260004 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:06:04.260073 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:06:04.268298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:06:04.293484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:06:04.303206 containerd[1618]: time="2025-01-29T11:06:04.302996533Z" level=info msg="shim disconnected" id=11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c namespace=k8s.io Jan 29 11:06:04.303206 containerd[1618]: time="2025-01-29T11:06:04.303070935Z" level=warning msg="cleaning up after shim disconnected" id=11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c namespace=k8s.io Jan 29 11:06:04.303206 containerd[1618]: time="2025-01-29T11:06:04.303079976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:05.165307 containerd[1618]: time="2025-01-29T11:06:05.165157227Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:06:05.189194 containerd[1618]: time="2025-01-29T11:06:05.189057391Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\"" Jan 29 11:06:05.190728 containerd[1618]: time="2025-01-29T11:06:05.190677797Z" level=info msg="StartContainer for \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\"" Jan 29 11:06:05.259579 containerd[1618]: time="2025-01-29T11:06:05.259534087Z" level=info msg="StartContainer for \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\" returns successfully" Jan 29 11:06:05.287572 containerd[1618]: time="2025-01-29T11:06:05.287509048Z" level=info msg="shim disconnected" id=ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe namespace=k8s.io Jan 29 11:06:05.288091 containerd[1618]: time="2025-01-29T11:06:05.287899939Z" level=warning msg="cleaning up after shim disconnected" id=ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe namespace=k8s.io Jan 29 11:06:05.288091 
containerd[1618]: time="2025-01-29T11:06:05.287920099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:05.858926 systemd[1]: run-containerd-runc-k8s.io-ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe-runc.VOcftr.mount: Deactivated successfully. Jan 29 11:06:05.859199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe-rootfs.mount: Deactivated successfully. Jan 29 11:06:06.172304 containerd[1618]: time="2025-01-29T11:06:06.171959683Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:06:06.221588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274166977.mount: Deactivated successfully. Jan 29 11:06:06.226422 containerd[1618]: time="2025-01-29T11:06:06.226370524Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\"" Jan 29 11:06:06.228770 containerd[1618]: time="2025-01-29T11:06:06.228589827Z" level=info msg="StartContainer for \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\"" Jan 29 11:06:06.326125 containerd[1618]: time="2025-01-29T11:06:06.325925339Z" level=info msg="StartContainer for \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\" returns successfully" Jan 29 11:06:06.369718 containerd[1618]: time="2025-01-29T11:06:06.369448827Z" level=info msg="shim disconnected" id=4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691 namespace=k8s.io Jan 29 11:06:06.369945 containerd[1618]: time="2025-01-29T11:06:06.369724595Z" level=warning msg="cleaning up after shim disconnected" 
id=4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691 namespace=k8s.io Jan 29 11:06:06.369945 containerd[1618]: time="2025-01-29T11:06:06.369755556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:06.612550 containerd[1618]: time="2025-01-29T11:06:06.612454517Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:06:06.614632 containerd[1618]: time="2025-01-29T11:06:06.614533016Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:06:06.615834 containerd[1618]: time="2025-01-29T11:06:06.615794172Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:06:06.619776 containerd[1618]: time="2025-01-29T11:06:06.619699484Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.776183171s" Jan 29 11:06:06.620396 containerd[1618]: time="2025-01-29T11:06:06.620009933Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:06:06.624778 containerd[1618]: time="2025-01-29T11:06:06.624590865Z" level=info msg="CreateContainer within sandbox 
\"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:06:06.640651 containerd[1618]: time="2025-01-29T11:06:06.640598884Z" level=info msg="CreateContainer within sandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\"" Jan 29 11:06:06.642420 containerd[1618]: time="2025-01-29T11:06:06.642295252Z" level=info msg="StartContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\"" Jan 29 11:06:06.702382 containerd[1618]: time="2025-01-29T11:06:06.702241452Z" level=info msg="StartContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" returns successfully" Jan 29 11:06:06.864781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691-rootfs.mount: Deactivated successfully. Jan 29 11:06:07.186128 containerd[1618]: time="2025-01-29T11:06:07.184574058Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:06:07.211348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010193355.mount: Deactivated successfully. 
Jan 29 11:06:07.215805 containerd[1618]: time="2025-01-29T11:06:07.215712433Z" level=info msg="CreateContainer within sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\"" Jan 29 11:06:07.218511 containerd[1618]: time="2025-01-29T11:06:07.218343149Z" level=info msg="StartContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\"" Jan 29 11:06:07.250940 kubelet[3069]: I0129 11:06:07.248250 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-r76lr" podStartSLOduration=1.601118004 podStartE2EDuration="9.248228688s" podCreationTimestamp="2025-01-29 11:05:58 +0000 UTC" firstStartedPulling="2025-01-29 11:05:58.973898758 +0000 UTC m=+16.038742340" lastFinishedPulling="2025-01-29 11:06:06.621009442 +0000 UTC m=+23.685853024" observedRunningTime="2025-01-29 11:06:07.245922702 +0000 UTC m=+24.310766284" watchObservedRunningTime="2025-01-29 11:06:07.248228688 +0000 UTC m=+24.313072270" Jan 29 11:06:07.392730 containerd[1618]: time="2025-01-29T11:06:07.392661881Z" level=info msg="StartContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" returns successfully" Jan 29 11:06:07.587725 kubelet[3069]: I0129 11:06:07.587615 3069 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:06:07.654197 kubelet[3069]: I0129 11:06:07.651669 3069 topology_manager.go:215] "Topology Admit Handler" podUID="70b23573-3f75-42e0-a6d8-a791633a13e9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hpshv" Jan 29 11:06:07.657902 kubelet[3069]: I0129 11:06:07.657341 3069 topology_manager.go:215] "Topology Admit Handler" podUID="02432861-9ac8-4cf2-91d4-9c2eefcf32a0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kv5rd" Jan 29 11:06:07.724850 kubelet[3069]: I0129 11:06:07.724808 
3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnfv\" (UniqueName: \"kubernetes.io/projected/02432861-9ac8-4cf2-91d4-9c2eefcf32a0-kube-api-access-lwnfv\") pod \"coredns-7db6d8ff4d-kv5rd\" (UID: \"02432861-9ac8-4cf2-91d4-9c2eefcf32a0\") " pod="kube-system/coredns-7db6d8ff4d-kv5rd" Jan 29 11:06:07.725195 kubelet[3069]: I0129 11:06:07.725177 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02432861-9ac8-4cf2-91d4-9c2eefcf32a0-config-volume\") pod \"coredns-7db6d8ff4d-kv5rd\" (UID: \"02432861-9ac8-4cf2-91d4-9c2eefcf32a0\") " pod="kube-system/coredns-7db6d8ff4d-kv5rd" Jan 29 11:06:07.726491 kubelet[3069]: I0129 11:06:07.726466 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wlt\" (UniqueName: \"kubernetes.io/projected/70b23573-3f75-42e0-a6d8-a791633a13e9-kube-api-access-h6wlt\") pod \"coredns-7db6d8ff4d-hpshv\" (UID: \"70b23573-3f75-42e0-a6d8-a791633a13e9\") " pod="kube-system/coredns-7db6d8ff4d-hpshv" Jan 29 11:06:07.726756 kubelet[3069]: I0129 11:06:07.726683 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b23573-3f75-42e0-a6d8-a791633a13e9-config-volume\") pod \"coredns-7db6d8ff4d-hpshv\" (UID: \"70b23573-3f75-42e0-a6d8-a791633a13e9\") " pod="kube-system/coredns-7db6d8ff4d-hpshv" Jan 29 11:06:07.968786 containerd[1618]: time="2025-01-29T11:06:07.968647640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hpshv,Uid:70b23573-3f75-42e0-a6d8-a791633a13e9,Namespace:kube-system,Attempt:0,}" Jan 29 11:06:07.970705 containerd[1618]: time="2025-01-29T11:06:07.970616977Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-kv5rd,Uid:02432861-9ac8-4cf2-91d4-9c2eefcf32a0,Namespace:kube-system,Attempt:0,}" Jan 29 11:06:08.218290 kubelet[3069]: I0129 11:06:08.218110 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tdczl" podStartSLOduration=5.228927439 podStartE2EDuration="10.218093067s" podCreationTimestamp="2025-01-29 11:05:58 +0000 UTC" firstStartedPulling="2025-01-29 11:05:58.853694267 +0000 UTC m=+15.918537849" lastFinishedPulling="2025-01-29 11:06:03.842859895 +0000 UTC m=+20.907703477" observedRunningTime="2025-01-29 11:06:08.217974863 +0000 UTC m=+25.282818445" watchObservedRunningTime="2025-01-29 11:06:08.218093067 +0000 UTC m=+25.282936649" Jan 29 11:06:09.811533 systemd-networkd[1240]: cilium_host: Link UP Jan 29 11:06:09.811644 systemd-networkd[1240]: cilium_net: Link UP Jan 29 11:06:09.811789 systemd-networkd[1240]: cilium_net: Gained carrier Jan 29 11:06:09.811940 systemd-networkd[1240]: cilium_host: Gained carrier Jan 29 11:06:09.916420 systemd-networkd[1240]: cilium_vxlan: Link UP Jan 29 11:06:09.916656 systemd-networkd[1240]: cilium_vxlan: Gained carrier Jan 29 11:06:09.996197 systemd-networkd[1240]: cilium_host: Gained IPv6LL Jan 29 11:06:10.217798 kernel: NET: Registered PF_ALG protocol family Jan 29 11:06:10.379945 systemd-networkd[1240]: cilium_net: Gained IPv6LL Jan 29 11:06:10.958025 systemd-networkd[1240]: lxc_health: Link UP Jan 29 11:06:10.968895 systemd-networkd[1240]: lxc_health: Gained carrier Jan 29 11:06:11.570958 systemd-networkd[1240]: lxce5a18f5bf419: Link UP Jan 29 11:06:11.578114 kernel: eth0: renamed from tmpb7813 Jan 29 11:06:11.580610 systemd-networkd[1240]: lxc29a087c9bc3f: Link UP Jan 29 11:06:11.587930 kernel: eth0: renamed from tmp5514f Jan 29 11:06:11.600670 systemd-networkd[1240]: lxce5a18f5bf419: Gained carrier Jan 29 11:06:11.604088 systemd-networkd[1240]: lxc29a087c9bc3f: Gained carrier Jan 29 11:06:11.661352 systemd-networkd[1240]: cilium_vxlan: 
Gained IPv6LL Jan 29 11:06:12.684738 systemd-networkd[1240]: lxc29a087c9bc3f: Gained IPv6LL Jan 29 11:06:12.812706 systemd-networkd[1240]: lxc_health: Gained IPv6LL Jan 29 11:06:13.260194 systemd-networkd[1240]: lxce5a18f5bf419: Gained IPv6LL Jan 29 11:06:15.647714 containerd[1618]: time="2025-01-29T11:06:15.638090056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:06:15.647714 containerd[1618]: time="2025-01-29T11:06:15.638188259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:06:15.647714 containerd[1618]: time="2025-01-29T11:06:15.638201419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:06:15.647714 containerd[1618]: time="2025-01-29T11:06:15.638574150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:06:15.654782 containerd[1618]: time="2025-01-29T11:06:15.653202058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:06:15.654782 containerd[1618]: time="2025-01-29T11:06:15.653463785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:06:15.654782 containerd[1618]: time="2025-01-29T11:06:15.653969440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:06:15.654782 containerd[1618]: time="2025-01-29T11:06:15.654189006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:06:15.758217 containerd[1618]: time="2025-01-29T11:06:15.758174246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kv5rd,Uid:02432861-9ac8-4cf2-91d4-9c2eefcf32a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b78130e2a3b6b67099c10a37640731536d6090ebcd8bc066158684601a58ec13\"" Jan 29 11:06:15.768439 containerd[1618]: time="2025-01-29T11:06:15.768197299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hpshv,Uid:70b23573-3f75-42e0-a6d8-a791633a13e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5514f4acae716405ae8b641fa721960e6cd8c7b2abd4e040efb002a06b59e6ef\"" Jan 29 11:06:15.768439 containerd[1618]: time="2025-01-29T11:06:15.768273261Z" level=info msg="CreateContainer within sandbox \"b78130e2a3b6b67099c10a37640731536d6090ebcd8bc066158684601a58ec13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:06:15.772685 containerd[1618]: time="2025-01-29T11:06:15.772577747Z" level=info msg="CreateContainer within sandbox \"5514f4acae716405ae8b641fa721960e6cd8c7b2abd4e040efb002a06b59e6ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:06:15.802079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075934575.mount: Deactivated successfully. 
Jan 29 11:06:15.810846 containerd[1618]: time="2025-01-29T11:06:15.809105575Z" level=info msg="CreateContainer within sandbox \"5514f4acae716405ae8b641fa721960e6cd8c7b2abd4e040efb002a06b59e6ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"359690cc97d753c3c52063a4d6681c2199e762ede9c9768516f186e48a9b842a\"" Jan 29 11:06:15.813371 containerd[1618]: time="2025-01-29T11:06:15.812685080Z" level=info msg="StartContainer for \"359690cc97d753c3c52063a4d6681c2199e762ede9c9768516f186e48a9b842a\"" Jan 29 11:06:15.819708 containerd[1618]: time="2025-01-29T11:06:15.819659444Z" level=info msg="CreateContainer within sandbox \"b78130e2a3b6b67099c10a37640731536d6090ebcd8bc066158684601a58ec13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b745ee96f35a06f0369e9a1c70d6bc6056c4ca3f391c945aca21da55ac710d2f\"" Jan 29 11:06:15.820913 containerd[1618]: time="2025-01-29T11:06:15.820854919Z" level=info msg="StartContainer for \"b745ee96f35a06f0369e9a1c70d6bc6056c4ca3f391c945aca21da55ac710d2f\"" Jan 29 11:06:15.903912 containerd[1618]: time="2025-01-29T11:06:15.903113283Z" level=info msg="StartContainer for \"359690cc97d753c3c52063a4d6681c2199e762ede9c9768516f186e48a9b842a\" returns successfully" Jan 29 11:06:15.906330 containerd[1618]: time="2025-01-29T11:06:15.906245975Z" level=info msg="StartContainer for \"b745ee96f35a06f0369e9a1c70d6bc6056c4ca3f391c945aca21da55ac710d2f\" returns successfully" Jan 29 11:06:16.235404 kubelet[3069]: I0129 11:06:16.235259 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kv5rd" podStartSLOduration=18.235240924 podStartE2EDuration="18.235240924s" podCreationTimestamp="2025-01-29 11:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:06:16.233136783 +0000 UTC m=+33.297980405" watchObservedRunningTime="2025-01-29 11:06:16.235240924 +0000 UTC 
m=+33.300084506"
Jan 29 11:06:16.257661 kubelet[3069]: I0129 11:06:16.257578 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hpshv" podStartSLOduration=18.257560978 podStartE2EDuration="18.257560978s" podCreationTimestamp="2025-01-29 11:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:06:16.255994052 +0000 UTC m=+33.320837634" watchObservedRunningTime="2025-01-29 11:06:16.257560978 +0000 UTC m=+33.322404560"
Jan 29 11:06:27.691072 kubelet[3069]: I0129 11:06:27.690716 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:06:39.962127 systemd[1]: Started sshd@8-188.245.239.20:22-195.178.110.65:44572.service - OpenSSH per-connection server daemon (195.178.110.65:44572).
Jan 29 11:06:40.043989 sshd[4445]: Invalid user ubuntu from 195.178.110.65 port 44572
Jan 29 11:06:40.059033 sshd[4445]: Connection closed by invalid user ubuntu 195.178.110.65 port 44572 [preauth]
Jan 29 11:06:40.063895 systemd[1]: sshd@8-188.245.239.20:22-195.178.110.65:44572.service: Deactivated successfully.
Jan 29 11:10:32.125043 systemd[1]: Started sshd@9-188.245.239.20:22-147.75.109.163:48962.service - OpenSSH per-connection server daemon (147.75.109.163:48962).
Jan 29 11:10:33.131331 sshd[4488]: Accepted publickey for core from 147.75.109.163 port 48962 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:33.133595 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:33.142412 systemd-logind[1595]: New session 8 of user core.
Jan 29 11:10:33.148097 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:10:33.922049 sshd[4491]: Connection closed by 147.75.109.163 port 48962
Jan 29 11:10:33.922843 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:33.931402 systemd[1]: sshd@9-188.245.239.20:22-147.75.109.163:48962.service: Deactivated successfully.
Jan 29 11:10:33.935196 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:10:33.936187 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:10:33.937195 systemd-logind[1595]: Removed session 8.
Jan 29 11:10:39.091440 systemd[1]: Started sshd@10-188.245.239.20:22-147.75.109.163:33312.service - OpenSSH per-connection server daemon (147.75.109.163:33312).
Jan 29 11:10:40.083491 sshd[4503]: Accepted publickey for core from 147.75.109.163 port 33312 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:40.086359 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:40.093448 systemd-logind[1595]: New session 9 of user core.
Jan 29 11:10:40.099175 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 11:10:40.878288 sshd[4506]: Connection closed by 147.75.109.163 port 33312
Jan 29 11:10:40.879984 sshd-session[4503]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:40.884023 systemd[1]: sshd@10-188.245.239.20:22-147.75.109.163:33312.service: Deactivated successfully.
Jan 29 11:10:40.890268 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:10:40.891056 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:10:40.892328 systemd-logind[1595]: Removed session 9.
Jan 29 11:10:46.049051 systemd[1]: Started sshd@11-188.245.239.20:22-147.75.109.163:33316.service - OpenSSH per-connection server daemon (147.75.109.163:33316).
Jan 29 11:10:47.040423 sshd[4520]: Accepted publickey for core from 147.75.109.163 port 33316 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:47.041774 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:47.047002 systemd-logind[1595]: New session 10 of user core.
Jan 29 11:10:47.052130 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:10:47.808235 sshd[4523]: Connection closed by 147.75.109.163 port 33316
Jan 29 11:10:47.810384 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:47.815093 systemd[1]: sshd@11-188.245.239.20:22-147.75.109.163:33316.service: Deactivated successfully.
Jan 29 11:10:47.823458 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:10:47.824551 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:10:47.826953 systemd-logind[1595]: Removed session 10.
Jan 29 11:10:47.981609 systemd[1]: Started sshd@12-188.245.239.20:22-147.75.109.163:57940.service - OpenSSH per-connection server daemon (147.75.109.163:57940).
Jan 29 11:10:48.980957 sshd[4534]: Accepted publickey for core from 147.75.109.163 port 57940 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:48.980521 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:48.994816 systemd-logind[1595]: New session 11 of user core.
Jan 29 11:10:49.002408 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 11:10:49.779834 sshd[4537]: Connection closed by 147.75.109.163 port 57940
Jan 29 11:10:49.780640 sshd-session[4534]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:49.793225 systemd[1]: sshd@12-188.245.239.20:22-147.75.109.163:57940.service: Deactivated successfully.
Jan 29 11:10:49.793479 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:10:49.799592 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:10:49.804852 systemd-logind[1595]: Removed session 11.
Jan 29 11:10:49.946243 systemd[1]: Started sshd@13-188.245.239.20:22-147.75.109.163:57952.service - OpenSSH per-connection server daemon (147.75.109.163:57952).
Jan 29 11:10:50.937961 sshd[4546]: Accepted publickey for core from 147.75.109.163 port 57952 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:50.940568 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:50.949215 systemd-logind[1595]: New session 12 of user core.
Jan 29 11:10:50.955266 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 11:10:51.717333 sshd[4549]: Connection closed by 147.75.109.163 port 57952
Jan 29 11:10:51.718250 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:51.730135 systemd[1]: sshd@13-188.245.239.20:22-147.75.109.163:57952.service: Deactivated successfully.
Jan 29 11:10:51.735421 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:10:51.737476 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:10:51.738648 systemd-logind[1595]: Removed session 12.
Jan 29 11:10:56.893337 systemd[1]: Started sshd@14-188.245.239.20:22-147.75.109.163:57954.service - OpenSSH per-connection server daemon (147.75.109.163:57954).
Jan 29 11:10:57.907820 sshd[4560]: Accepted publickey for core from 147.75.109.163 port 57954 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:57.909600 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:57.915780 systemd-logind[1595]: New session 13 of user core.
Jan 29 11:10:57.923630 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:10:58.682856 sshd[4563]: Connection closed by 147.75.109.163 port 57954
Jan 29 11:10:58.682511 sshd-session[4560]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:58.687072 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:10:58.688725 systemd[1]: sshd@14-188.245.239.20:22-147.75.109.163:57954.service: Deactivated successfully.
Jan 29 11:10:58.692704 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:10:58.696668 systemd-logind[1595]: Removed session 13.
Jan 29 11:10:58.852106 systemd[1]: Started sshd@15-188.245.239.20:22-147.75.109.163:47050.service - OpenSSH per-connection server daemon (147.75.109.163:47050).
Jan 29 11:10:59.833144 sshd[4573]: Accepted publickey for core from 147.75.109.163 port 47050 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:10:59.836098 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:59.844864 systemd-logind[1595]: New session 14 of user core.
Jan 29 11:10:59.850237 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:11:00.632822 sshd[4578]: Connection closed by 147.75.109.163 port 47050
Jan 29 11:11:00.633012 sshd-session[4573]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:00.642414 systemd[1]: sshd@15-188.245.239.20:22-147.75.109.163:47050.service: Deactivated successfully.
Jan 29 11:11:00.643017 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:11:00.648298 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:11:00.650632 systemd-logind[1595]: Removed session 14.
Jan 29 11:11:00.800368 systemd[1]: Started sshd@16-188.245.239.20:22-147.75.109.163:47064.service - OpenSSH per-connection server daemon (147.75.109.163:47064).
Jan 29 11:11:01.786019 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 47064 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:01.789209 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:01.797026 systemd-logind[1595]: New session 15 of user core.
Jan 29 11:11:01.805229 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:11:04.243061 sshd[4590]: Connection closed by 147.75.109.163 port 47064
Jan 29 11:11:04.243929 sshd-session[4587]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:04.250472 systemd[1]: sshd@16-188.245.239.20:22-147.75.109.163:47064.service: Deactivated successfully.
Jan 29 11:11:04.254658 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:11:04.256770 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:11:04.258546 systemd-logind[1595]: Removed session 15.
Jan 29 11:11:04.413284 systemd[1]: Started sshd@17-188.245.239.20:22-147.75.109.163:47068.service - OpenSSH per-connection server daemon (147.75.109.163:47068).
Jan 29 11:11:05.422561 sshd[4607]: Accepted publickey for core from 147.75.109.163 port 47068 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:05.424678 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:05.434090 systemd-logind[1595]: New session 16 of user core.
Jan 29 11:11:05.442773 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:11:06.313957 sshd[4610]: Connection closed by 147.75.109.163 port 47068
Jan 29 11:11:06.315820 sshd-session[4607]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:06.320697 systemd[1]: sshd@17-188.245.239.20:22-147.75.109.163:47068.service: Deactivated successfully.
Jan 29 11:11:06.327274 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:11:06.329528 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:11:06.332170 systemd-logind[1595]: Removed session 16.
Jan 29 11:11:06.489030 systemd[1]: Started sshd@18-188.245.239.20:22-147.75.109.163:47078.service - OpenSSH per-connection server daemon (147.75.109.163:47078).
Jan 29 11:11:07.489720 sshd[4619]: Accepted publickey for core from 147.75.109.163 port 47078 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:07.493372 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:07.500081 systemd-logind[1595]: New session 17 of user core.
Jan 29 11:11:07.508702 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:11:08.258798 sshd[4622]: Connection closed by 147.75.109.163 port 47078
Jan 29 11:11:08.259533 sshd-session[4619]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:08.266591 systemd[1]: sshd@18-188.245.239.20:22-147.75.109.163:47078.service: Deactivated successfully.
Jan 29 11:11:08.272868 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:11:08.275624 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:11:08.278388 systemd-logind[1595]: Removed session 17.
Jan 29 11:11:13.428129 systemd[1]: Started sshd@19-188.245.239.20:22-147.75.109.163:55318.service - OpenSSH per-connection server daemon (147.75.109.163:55318).
Jan 29 11:11:14.439903 sshd[4637]: Accepted publickey for core from 147.75.109.163 port 55318 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:14.441809 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:14.452673 systemd-logind[1595]: New session 18 of user core.
Jan 29 11:11:14.457287 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:11:15.205064 sshd[4640]: Connection closed by 147.75.109.163 port 55318
Jan 29 11:11:15.205823 sshd-session[4637]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:15.212366 systemd[1]: sshd@19-188.245.239.20:22-147.75.109.163:55318.service: Deactivated successfully.
Jan 29 11:11:15.218442 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:11:15.220276 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:11:15.221551 systemd-logind[1595]: Removed session 18.
Jan 29 11:11:20.372280 systemd[1]: Started sshd@20-188.245.239.20:22-147.75.109.163:41612.service - OpenSSH per-connection server daemon (147.75.109.163:41612).
Jan 29 11:11:21.379579 sshd[4651]: Accepted publickey for core from 147.75.109.163 port 41612 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:21.380893 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:21.387118 systemd-logind[1595]: New session 19 of user core.
Jan 29 11:11:21.396540 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:11:22.137536 sshd[4654]: Connection closed by 147.75.109.163 port 41612
Jan 29 11:11:22.137406 sshd-session[4651]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:22.145218 systemd[1]: sshd@20-188.245.239.20:22-147.75.109.163:41612.service: Deactivated successfully.
Jan 29 11:11:22.149541 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:11:22.152610 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:11:22.153991 systemd-logind[1595]: Removed session 19.
Jan 29 11:11:22.301217 systemd[1]: Started sshd@21-188.245.239.20:22-147.75.109.163:41614.service - OpenSSH per-connection server daemon (147.75.109.163:41614).
Jan 29 11:11:23.307367 sshd[4665]: Accepted publickey for core from 147.75.109.163 port 41614 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:23.310242 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:23.316156 systemd-logind[1595]: New session 20 of user core.
Jan 29 11:11:23.323171 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:11:26.280120 containerd[1618]: time="2025-01-29T11:11:26.280006859Z" level=info msg="StopContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" with timeout 30 (s)"
Jan 29 11:11:26.282635 containerd[1618]: time="2025-01-29T11:11:26.282159975Z" level=info msg="Stop container \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" with signal terminated"
Jan 29 11:11:26.296973 containerd[1618]: time="2025-01-29T11:11:26.296558549Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:11:26.309721 containerd[1618]: time="2025-01-29T11:11:26.309458045Z" level=info msg="StopContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" with timeout 2 (s)"
Jan 29 11:11:26.310473 containerd[1618]: time="2025-01-29T11:11:26.310338043Z" level=info msg="Stop container \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" with signal terminated"
Jan 29 11:11:26.323594 systemd-networkd[1240]: lxc_health: Link DOWN
Jan 29 11:11:26.323603 systemd-networkd[1240]: lxc_health: Lost carrier
Jan 29 11:11:26.353859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62-rootfs.mount: Deactivated successfully.
Jan 29 11:11:26.378475 containerd[1618]: time="2025-01-29T11:11:26.377241599Z" level=info msg="shim disconnected" id=1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62 namespace=k8s.io
Jan 29 11:11:26.378475 containerd[1618]: time="2025-01-29T11:11:26.377385719Z" level=warning msg="cleaning up after shim disconnected" id=1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62 namespace=k8s.io
Jan 29 11:11:26.378475 containerd[1618]: time="2025-01-29T11:11:26.377396959Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:26.380823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705-rootfs.mount: Deactivated successfully.
Jan 29 11:11:26.383103 containerd[1618]: time="2025-01-29T11:11:26.382596509Z" level=info msg="shim disconnected" id=4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705 namespace=k8s.io
Jan 29 11:11:26.383397 containerd[1618]: time="2025-01-29T11:11:26.383236188Z" level=warning msg="cleaning up after shim disconnected" id=4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705 namespace=k8s.io
Jan 29 11:11:26.383397 containerd[1618]: time="2025-01-29T11:11:26.383261828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:26.403582 containerd[1618]: time="2025-01-29T11:11:26.403529070Z" level=info msg="StopContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" returns successfully"
Jan 29 11:11:26.406900 containerd[1618]: time="2025-01-29T11:11:26.404253429Z" level=info msg="StopPodSandbox for \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\""
Jan 29 11:11:26.406900 containerd[1618]: time="2025-01-29T11:11:26.404306389Z" level=info msg="Container to stop \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.409214 containerd[1618]: time="2025-01-29T11:11:26.408840460Z" level=info msg="StopContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" returns successfully"
Jan 29 11:11:26.409214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9-shm.mount: Deactivated successfully.
Jan 29 11:11:26.413631 containerd[1618]: time="2025-01-29T11:11:26.412788333Z" level=info msg="StopPodSandbox for \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\""
Jan 29 11:11:26.413951 containerd[1618]: time="2025-01-29T11:11:26.413907851Z" level=info msg="Container to stop \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.414087 containerd[1618]: time="2025-01-29T11:11:26.414059931Z" level=info msg="Container to stop \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.414242 containerd[1618]: time="2025-01-29T11:11:26.414215250Z" level=info msg="Container to stop \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.414385 containerd[1618]: time="2025-01-29T11:11:26.414359850Z" level=info msg="Container to stop \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.414809 containerd[1618]: time="2025-01-29T11:11:26.414780209Z" level=info msg="Container to stop \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:11:26.465007 containerd[1618]: time="2025-01-29T11:11:26.464928116Z" level=info msg="shim disconnected" id=e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9 namespace=k8s.io
Jan 29 11:11:26.465007 containerd[1618]: time="2025-01-29T11:11:26.464998436Z" level=warning msg="cleaning up after shim disconnected" id=e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9 namespace=k8s.io
Jan 29 11:11:26.465007 containerd[1618]: time="2025-01-29T11:11:26.465006636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:26.466535 containerd[1618]: time="2025-01-29T11:11:26.466337114Z" level=info msg="shim disconnected" id=6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d namespace=k8s.io
Jan 29 11:11:26.466535 containerd[1618]: time="2025-01-29T11:11:26.466394713Z" level=warning msg="cleaning up after shim disconnected" id=6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d namespace=k8s.io
Jan 29 11:11:26.466535 containerd[1618]: time="2025-01-29T11:11:26.466405713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:26.485169 containerd[1618]: time="2025-01-29T11:11:26.485027079Z" level=info msg="TearDown network for sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" successfully"
Jan 29 11:11:26.485169 containerd[1618]: time="2025-01-29T11:11:26.485092359Z" level=info msg="StopPodSandbox for \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" returns successfully"
Jan 29 11:11:26.486249 containerd[1618]: time="2025-01-29T11:11:26.486021597Z" level=info msg="TearDown network for sandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" successfully"
Jan 29 11:11:26.486249 containerd[1618]: time="2025-01-29T11:11:26.486063557Z" level=info msg="StopPodSandbox for \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" returns successfully"
Jan 29 11:11:26.522930 kubelet[3069]: I0129 11:11:26.522886 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4ps\" (UniqueName: \"kubernetes.io/projected/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-kube-api-access-kg4ps\") pod \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\" (UID: \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\") "
Jan 29 11:11:26.523492 kubelet[3069]: I0129 11:11:26.522945 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-cilium-config-path\") pod \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\" (UID: \"1a0fd7d9-3ac9-4043-a6c3-52c444a0b277\") "
Jan 29 11:11:26.525795 kubelet[3069]: I0129 11:11:26.525050 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" (UID: "1a0fd7d9-3ac9-4043-a6c3-52c444a0b277"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:11:26.528792 kubelet[3069]: I0129 11:11:26.528541 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-kube-api-access-kg4ps" (OuterVolumeSpecName: "kube-api-access-kg4ps") pod "1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" (UID: "1a0fd7d9-3ac9-4043-a6c3-52c444a0b277"). InnerVolumeSpecName "kube-api-access-kg4ps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623450 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-lib-modules\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623505 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-bpf-maps\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623532 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-config-path\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623560 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-hubble-tls\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623584 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-etc-cni-netd\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.623596 kubelet[3069]: I0129 11:11:26.623605 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-run\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623626 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-hostproc\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623649 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c337fed5-245e-44e2-949a-39bdbd3c0207-clustermesh-secrets\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623669 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-cgroup\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623690 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-xtables-lock\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623711 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-net\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.625818 kubelet[3069]: I0129 11:11:26.623730 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-kernel\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623788 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-687zx\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-kube-api-access-687zx\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623815 3069 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cni-path\") pod \"c337fed5-245e-44e2-949a-39bdbd3c0207\" (UID: \"c337fed5-245e-44e2-949a-39bdbd3c0207\") "
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623860 3069 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-cilium-config-path\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623873 3069 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kg4ps\" (UniqueName: \"kubernetes.io/projected/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277-kube-api-access-kg4ps\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623899 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cni-path" (OuterVolumeSpecName: "cni-path") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626209 kubelet[3069]: I0129 11:11:26.623444 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626490 kubelet[3069]: I0129 11:11:26.623949 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626490 kubelet[3069]: I0129 11:11:26.625731 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626490 kubelet[3069]: I0129 11:11:26.625845 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626490 kubelet[3069]: I0129 11:11:26.625866 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.626490 kubelet[3069]: I0129 11:11:26.625886 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-hostproc" (OuterVolumeSpecName: "hostproc") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.630279 kubelet[3069]: I0129 11:11:26.629418 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c337fed5-245e-44e2-949a-39bdbd3c0207-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:11:26.630279 kubelet[3069]: I0129 11:11:26.629641 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.630279 kubelet[3069]: I0129 11:11:26.629707 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.630279 kubelet[3069]: I0129 11:11:26.629734 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:11:26.631429 kubelet[3069]: I0129 11:11:26.631253 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:11:26.633694 kubelet[3069]: I0129 11:11:26.633636 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-kube-api-access-687zx" (OuterVolumeSpecName: "kube-api-access-687zx") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "kube-api-access-687zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:11:26.633880 kubelet[3069]: I0129 11:11:26.633842 3069 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c337fed5-245e-44e2-949a-39bdbd3c0207" (UID: "c337fed5-245e-44e2-949a-39bdbd3c0207"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725070 3069 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-hostproc\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725116 3069 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c337fed5-245e-44e2-949a-39bdbd3c0207-clustermesh-secrets\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725137 3069 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-cgroup\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725156 3069 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-xtables-lock\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725172 3069 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-net\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725189 3069 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-host-proc-sys-kernel\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725206 3069 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-687zx\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-kube-api-access-687zx\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725384 kubelet[3069]: I0129 11:11:26.725222 3069 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cni-path\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725240 3069 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-lib-modules\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725256 3069 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-bpf-maps\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725271 3069 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-config-path\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725288 3069 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c337fed5-245e-44e2-949a-39bdbd3c0207-hubble-tls\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725303 3069 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-etc-cni-netd\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:26.725945 kubelet[3069]: I0129 11:11:26.725318 3069 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c337fed5-245e-44e2-949a-39bdbd3c0207-cilium-run\") on node \"ci-4152-2-0-5-7d4b33c67e\" DevicePath \"\""
Jan 29 11:11:27.062522 kubelet[3069]: I0129 11:11:27.062124 3069 scope.go:117] "RemoveContainer" containerID="1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62"
Jan 29 11:11:27.073006 containerd[1618]: time="2025-01-29T11:11:27.072956120Z" level=info msg="RemoveContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\""
Jan 29 11:11:27.079771 containerd[1618]: time="2025-01-29T11:11:27.078576431Z" level=info msg="RemoveContainer for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" returns successfully"
Jan 29 11:11:27.091142 kubelet[3069]: I0129 11:11:27.091023 3069 scope.go:117] "RemoveContainer" containerID="1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62"
Jan 29 11:11:27.092322 containerd[1618]: time="2025-01-29T11:11:27.092240088Z" level=error msg="ContainerStatus for \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\": not found"
Jan 29 11:11:27.096895 kubelet[3069]: E0129 11:11:27.096601 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\": not found" containerID="1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62"
Jan 29 11:11:27.096895 kubelet[3069]: I0129
11:11:27.096664 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62"} err="failed to get container status \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\": rpc error: code = NotFound desc = an error occurred when try to find container \"1001063fe302f915c1e5c1368e0eadd77042ae19d71b5415da68127159c8eb62\": not found" Jan 29 11:11:27.096895 kubelet[3069]: I0129 11:11:27.096780 3069 scope.go:117] "RemoveContainer" containerID="4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705" Jan 29 11:11:27.102614 containerd[1618]: time="2025-01-29T11:11:27.102568150Z" level=info msg="RemoveContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\"" Jan 29 11:11:27.106822 containerd[1618]: time="2025-01-29T11:11:27.106733863Z" level=info msg="RemoveContainer for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" returns successfully" Jan 29 11:11:27.107397 kubelet[3069]: I0129 11:11:27.107235 3069 scope.go:117] "RemoveContainer" containerID="4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691" Jan 29 11:11:27.108926 containerd[1618]: time="2025-01-29T11:11:27.108894380Z" level=info msg="RemoveContainer for \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\"" Jan 29 11:11:27.114672 containerd[1618]: time="2025-01-29T11:11:27.114624570Z" level=info msg="RemoveContainer for \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\" returns successfully" Jan 29 11:11:27.117002 kubelet[3069]: I0129 11:11:27.116710 3069 scope.go:117] "RemoveContainer" containerID="ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe" Jan 29 11:11:27.118408 containerd[1618]: time="2025-01-29T11:11:27.118375084Z" level=info msg="RemoveContainer for \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\"" Jan 29 11:11:27.122150 containerd[1618]: 
time="2025-01-29T11:11:27.122111957Z" level=info msg="RemoveContainer for \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\" returns successfully" Jan 29 11:11:27.122467 kubelet[3069]: I0129 11:11:27.122345 3069 scope.go:117] "RemoveContainer" containerID="11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c" Jan 29 11:11:27.123762 containerd[1618]: time="2025-01-29T11:11:27.123719635Z" level=info msg="RemoveContainer for \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\"" Jan 29 11:11:27.126700 containerd[1618]: time="2025-01-29T11:11:27.126623030Z" level=info msg="RemoveContainer for \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\" returns successfully" Jan 29 11:11:27.127231 kubelet[3069]: I0129 11:11:27.127148 3069 scope.go:117] "RemoveContainer" containerID="6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814" Jan 29 11:11:27.128685 containerd[1618]: time="2025-01-29T11:11:27.128595546Z" level=info msg="RemoveContainer for \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\"" Jan 29 11:11:27.131992 containerd[1618]: time="2025-01-29T11:11:27.131961621Z" level=info msg="RemoveContainer for \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\" returns successfully" Jan 29 11:11:27.132387 kubelet[3069]: I0129 11:11:27.132297 3069 scope.go:117] "RemoveContainer" containerID="4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705" Jan 29 11:11:27.132761 containerd[1618]: time="2025-01-29T11:11:27.132602300Z" level=error msg="ContainerStatus for \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\": not found" Jan 29 11:11:27.132964 kubelet[3069]: E0129 11:11:27.132858 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\": not found" containerID="4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705" Jan 29 11:11:27.132964 kubelet[3069]: I0129 11:11:27.132885 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705"} err="failed to get container status \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b423d49fbaf25de5a08a0c64a25ef8bbcd8636f80b9ee1db2575216d6a72705\": not found" Jan 29 11:11:27.132964 kubelet[3069]: I0129 11:11:27.132908 3069 scope.go:117] "RemoveContainer" containerID="4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691" Jan 29 11:11:27.133640 containerd[1618]: time="2025-01-29T11:11:27.133516658Z" level=error msg="ContainerStatus for \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\": not found" Jan 29 11:11:27.133827 kubelet[3069]: E0129 11:11:27.133806 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\": not found" containerID="4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691" Jan 29 11:11:27.133958 kubelet[3069]: I0129 11:11:27.133895 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691"} err="failed to get container status \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"4675d0f429e47371cbd36f4bbbf1652da0186cbf557765a32d311d5effca7691\": not found" Jan 29 11:11:27.133958 kubelet[3069]: I0129 11:11:27.133920 3069 scope.go:117] "RemoveContainer" containerID="ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe" Jan 29 11:11:27.134438 containerd[1618]: time="2025-01-29T11:11:27.134223657Z" level=error msg="ContainerStatus for \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\": not found" Jan 29 11:11:27.134514 kubelet[3069]: E0129 11:11:27.134341 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\": not found" containerID="ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe" Jan 29 11:11:27.134514 kubelet[3069]: I0129 11:11:27.134360 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe"} err="failed to get container status \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac6722249b28e54e3ba92778dc7835d000623377a806b5446bd66649d34eb8fe\": not found" Jan 29 11:11:27.134514 kubelet[3069]: I0129 11:11:27.134376 3069 scope.go:117] "RemoveContainer" containerID="11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c" Jan 29 11:11:27.134584 containerd[1618]: time="2025-01-29T11:11:27.134531896Z" level=error msg="ContainerStatus for \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\": not found" Jan 29 11:11:27.134813 kubelet[3069]: E0129 11:11:27.134683 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\": not found" containerID="11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c" Jan 29 11:11:27.134813 kubelet[3069]: I0129 11:11:27.134768 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c"} err="failed to get container status \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\": rpc error: code = NotFound desc = an error occurred when try to find container \"11fb70b62a0857519f6f69a01becf929aeeaa17b8d60b8a43f8813ce41f8d89c\": not found" Jan 29 11:11:27.134813 kubelet[3069]: I0129 11:11:27.134784 3069 scope.go:117] "RemoveContainer" containerID="6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814" Jan 29 11:11:27.135190 containerd[1618]: time="2025-01-29T11:11:27.135128735Z" level=error msg="ContainerStatus for \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\": not found" Jan 29 11:11:27.135372 kubelet[3069]: E0129 11:11:27.135323 3069 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\": not found" containerID="6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814" Jan 29 11:11:27.135372 kubelet[3069]: I0129 11:11:27.135343 3069 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814"} err="failed to get container status \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d5075517735e823a4c69b68e1ea2c457f8891af3606e47e55d005cfb1fcd814\": not found" Jan 29 11:11:27.271825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9-rootfs.mount: Deactivated successfully. Jan 29 11:11:27.272205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d-rootfs.mount: Deactivated successfully. Jan 29 11:11:27.272367 systemd[1]: var-lib-kubelet-pods-1a0fd7d9\x2d3ac9\x2d4043\x2da6c3\x2d52c444a0b277-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkg4ps.mount: Deactivated successfully. Jan 29 11:11:27.272521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d-shm.mount: Deactivated successfully. Jan 29 11:11:27.272680 systemd[1]: var-lib-kubelet-pods-c337fed5\x2d245e\x2d44e2\x2d949a\x2d39bdbd3c0207-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d687zx.mount: Deactivated successfully. Jan 29 11:11:27.272869 systemd[1]: var-lib-kubelet-pods-c337fed5\x2d245e\x2d44e2\x2d949a\x2d39bdbd3c0207-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:11:27.273042 systemd[1]: var-lib-kubelet-pods-c337fed5\x2d245e\x2d44e2\x2d949a\x2d39bdbd3c0207-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 11:11:28.231677 kubelet[3069]: E0129 11:11:28.231607 3069 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:11:28.337275 sshd[4668]: Connection closed by 147.75.109.163 port 41614 Jan 29 11:11:28.338275 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:28.345356 systemd[1]: sshd@21-188.245.239.20:22-147.75.109.163:41614.service: Deactivated successfully. Jan 29 11:11:28.348760 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:11:28.350141 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:11:28.351331 systemd-logind[1595]: Removed session 20. Jan 29 11:11:28.504225 systemd[1]: Started sshd@22-188.245.239.20:22-147.75.109.163:47426.service - OpenSSH per-connection server daemon (147.75.109.163:47426). Jan 29 11:11:29.071592 kubelet[3069]: I0129 11:11:29.071549 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" path="/var/lib/kubelet/pods/1a0fd7d9-3ac9-4043-a6c3-52c444a0b277/volumes" Jan 29 11:11:29.072569 kubelet[3069]: I0129 11:11:29.072541 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" path="/var/lib/kubelet/pods/c337fed5-245e-44e2-949a-39bdbd3c0207/volumes" Jan 29 11:11:29.492516 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 47426 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:29.494909 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:29.499817 systemd-logind[1595]: New session 21 of user core. Jan 29 11:11:29.504114 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 11:11:30.289166 kubelet[3069]: I0129 11:11:30.289119 3069 setters.go:580] "Node became not ready" node="ci-4152-2-0-5-7d4b33c67e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:11:30Z","lastTransitionTime":"2025-01-29T11:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:11:31.230594 kubelet[3069]: I0129 11:11:31.230525 3069 topology_manager.go:215] "Topology Admit Handler" podUID="dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe" podNamespace="kube-system" podName="cilium-vtfzj" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230605 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="mount-bpf-fs" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230617 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="clean-cilium-state" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230632 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="mount-cgroup" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230643 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="apply-sysctl-overwrites" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230653 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" containerName="cilium-operator" Jan 29 11:11:31.230771 kubelet[3069]: E0129 11:11:31.230661 3069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="cilium-agent" Jan 29 11:11:31.230771 kubelet[3069]: I0129 11:11:31.230703 3069 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1a0fd7d9-3ac9-4043-a6c3-52c444a0b277" containerName="cilium-operator" Jan 29 11:11:31.230771 kubelet[3069]: I0129 11:11:31.230711 3069 memory_manager.go:354] "RemoveStaleState removing state" podUID="c337fed5-245e-44e2-949a-39bdbd3c0207" containerName="cilium-agent" Jan 29 11:11:31.247815 kubelet[3069]: W0129 11:11:31.246574 3069 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.247815 kubelet[3069]: E0129 11:11:31.246623 3069 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.247815 kubelet[3069]: W0129 11:11:31.246591 3069 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.247815 kubelet[3069]: E0129 11:11:31.246642 3069 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.248531 kubelet[3069]: W0129 11:11:31.248410 3069 
reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.248531 kubelet[3069]: E0129 11:11:31.248456 3069 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.248531 kubelet[3069]: W0129 11:11:31.248500 3069 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.248531 kubelet[3069]: E0129 11:11:31.248517 3069 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-5-7d4b33c67e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-5-7d4b33c67e' and this object Jan 29 11:11:31.260347 kubelet[3069]: I0129 11:11:31.260294 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-config-path\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260347 
kubelet[3069]: I0129 11:11:31.260346 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-host-proc-sys-net\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260879 kubelet[3069]: I0129 11:11:31.260365 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-host-proc-sys-kernel\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260879 kubelet[3069]: I0129 11:11:31.260384 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-clustermesh-secrets\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260879 kubelet[3069]: I0129 11:11:31.260413 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7rw\" (UniqueName: \"kubernetes.io/projected/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-kube-api-access-kt7rw\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260879 kubelet[3069]: I0129 11:11:31.260472 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-etc-cni-netd\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260879 kubelet[3069]: I0129 11:11:31.260498 3069 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-lib-modules\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260517 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-ipsec-secrets\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260535 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-bpf-maps\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260558 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-run\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260578 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-hostproc\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260598 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-cgroup\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.260989 kubelet[3069]: I0129 11:11:31.260619 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cni-path\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.261172 kubelet[3069]: I0129 11:11:31.260633 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-xtables-lock\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.261172 kubelet[3069]: I0129 11:11:31.260649 3069 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-hubble-tls\") pod \"cilium-vtfzj\" (UID: \"dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe\") " pod="kube-system/cilium-vtfzj" Jan 29 11:11:31.381816 sshd[4840]: Connection closed by 147.75.109.163 port 47426 Jan 29 11:11:31.382301 sshd-session[4835]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:31.390403 systemd[1]: sshd@22-188.245.239.20:22-147.75.109.163:47426.service: Deactivated successfully. Jan 29 11:11:31.392212 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:11:31.397059 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:11:31.398734 systemd-logind[1595]: Removed session 21. Jan 29 11:11:31.552098 systemd[1]: Started sshd@23-188.245.239.20:22-147.75.109.163:47442.service - OpenSSH per-connection server daemon (147.75.109.163:47442). 
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363605 3069 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363680 3069 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-vtfzj: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363691 3069 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363637 3069 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363823 3069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-hubble-tls podName:dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe nodeName:}" failed. No retries permitted until 2025-01-29 11:11:32.863783742 +0000 UTC m=+349.928627364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-hubble-tls") pod "cilium-vtfzj" (UID: "dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.363936 kubelet[3069]: E0129 11:11:32.363861 3069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-clustermesh-secrets podName:dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe nodeName:}" failed. No retries permitted until 2025-01-29 11:11:32.863837822 +0000 UTC m=+349.928681444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-clustermesh-secrets") pod "cilium-vtfzj" (UID: "dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.366007 kubelet[3069]: E0129 11:11:32.363892 3069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-ipsec-secrets podName:dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe nodeName:}" failed. No retries permitted until 2025-01-29 11:11:32.863874742 +0000 UTC m=+349.928718364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe-cilium-ipsec-secrets") pod "cilium-vtfzj" (UID: "dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:11:32.546013 sshd[4850]: Accepted publickey for core from 147.75.109.163 port 47442 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:32.548817 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:32.557862 systemd-logind[1595]: New session 22 of user core.
Jan 29 11:11:32.567831 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:11:33.035818 containerd[1618]: time="2025-01-29T11:11:33.035720734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtfzj,Uid:dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe,Namespace:kube-system,Attempt:0,}"
Jan 29 11:11:33.064157 containerd[1618]: time="2025-01-29T11:11:33.064037875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:11:33.064157 containerd[1618]: time="2025-01-29T11:11:33.064129235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:11:33.065035 containerd[1618]: time="2025-01-29T11:11:33.064142514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:33.065035 containerd[1618]: time="2025-01-29T11:11:33.064258634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:33.113211 containerd[1618]: time="2025-01-29T11:11:33.113167321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtfzj,Uid:dbbff0a0-9239-4ee3-8cbf-3d1cb51ae4fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\""
Jan 29 11:11:33.118243 containerd[1618]: time="2025-01-29T11:11:33.118203037Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:11:33.132415 containerd[1618]: time="2025-01-29T11:11:33.132333747Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b40909c86cc4d4938305e58b2461050e5e2ea40525bd7a13f1e88394fd9a24f1\""
Jan 29 11:11:33.133771 containerd[1618]: time="2025-01-29T11:11:33.133648586Z" level=info msg="StartContainer for \"b40909c86cc4d4938305e58b2461050e5e2ea40525bd7a13f1e88394fd9a24f1\""
Jan 29 11:11:33.190893 containerd[1618]: time="2025-01-29T11:11:33.190833907Z" level=info msg="StartContainer for \"b40909c86cc4d4938305e58b2461050e5e2ea40525bd7a13f1e88394fd9a24f1\" returns successfully"
Jan 29 11:11:33.231378 sshd[4853]: Connection closed by 147.75.109.163 port 47442
Jan 29 11:11:33.233601 kubelet[3069]: E0129 11:11:33.233506 3069 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:11:33.234158 sshd-session[4850]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:33.239490 systemd[1]: sshd@23-188.245.239.20:22-147.75.109.163:47442.service: Deactivated successfully.
Jan 29 11:11:33.243618 containerd[1618]: time="2025-01-29T11:11:33.243407911Z" level=info msg="shim disconnected" id=b40909c86cc4d4938305e58b2461050e5e2ea40525bd7a13f1e88394fd9a24f1 namespace=k8s.io
Jan 29 11:11:33.243618 containerd[1618]: time="2025-01-29T11:11:33.243533831Z" level=warning msg="cleaning up after shim disconnected" id=b40909c86cc4d4938305e58b2461050e5e2ea40525bd7a13f1e88394fd9a24f1 namespace=k8s.io
Jan 29 11:11:33.243618 containerd[1618]: time="2025-01-29T11:11:33.243544231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:33.247033 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:11:33.248277 systemd-logind[1595]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:11:33.249875 systemd-logind[1595]: Removed session 22.
Jan 29 11:11:33.403254 systemd[1]: Started sshd@24-188.245.239.20:22-147.75.109.163:47444.service - OpenSSH per-connection server daemon (147.75.109.163:47444).
Jan 29 11:11:34.121883 containerd[1618]: time="2025-01-29T11:11:34.121716283Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:11:34.147795 containerd[1618]: time="2025-01-29T11:11:34.147496750Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96\""
Jan 29 11:11:34.149159 containerd[1618]: time="2025-01-29T11:11:34.148225149Z" level=info msg="StartContainer for \"b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96\""
Jan 29 11:11:34.212735 containerd[1618]: time="2025-01-29T11:11:34.212686195Z" level=info msg="StartContainer for \"b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96\" returns successfully"
Jan 29 11:11:34.243477 containerd[1618]: time="2025-01-29T11:11:34.243414499Z" level=info msg="shim disconnected" id=b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96 namespace=k8s.io
Jan 29 11:11:34.243925 containerd[1618]: time="2025-01-29T11:11:34.243902459Z" level=warning msg="cleaning up after shim disconnected" id=b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96 namespace=k8s.io
Jan 29 11:11:34.244071 containerd[1618]: time="2025-01-29T11:11:34.244055258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:34.394205 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 47444 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:34.397471 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:34.403082 systemd-logind[1595]: New session 23 of user core.
Jan 29 11:11:34.410329 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:11:34.886097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7750d2d5b6ec7681a640da3ac3a44cdae2e977cd00608e0676db723fc89ed96-rootfs.mount: Deactivated successfully.
Jan 29 11:11:35.123438 containerd[1618]: time="2025-01-29T11:11:35.123280092Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:11:35.149531 containerd[1618]: time="2025-01-29T11:11:35.149384123Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c\""
Jan 29 11:11:35.150795 containerd[1618]: time="2025-01-29T11:11:35.150728722Z" level=info msg="StartContainer for \"ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c\""
Jan 29 11:11:35.218492 containerd[1618]: time="2025-01-29T11:11:35.217844617Z" level=info msg="StartContainer for \"ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c\" returns successfully"
Jan 29 11:11:35.251215 containerd[1618]: time="2025-01-29T11:11:35.250823605Z" level=info msg="shim disconnected" id=ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c namespace=k8s.io
Jan 29 11:11:35.251215 containerd[1618]: time="2025-01-29T11:11:35.250923365Z" level=warning msg="cleaning up after shim disconnected" id=ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c namespace=k8s.io
Jan 29 11:11:35.251215 containerd[1618]: time="2025-01-29T11:11:35.250943485Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:35.886944 systemd[1]: run-containerd-runc-k8s.io-ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c-runc.TNn4XA.mount: Deactivated successfully.
Jan 29 11:11:35.887235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae1d71d91bb06e2a4839fb8d6da498ce16f8d674450a6aac1d278075c63c615c-rootfs.mount: Deactivated successfully.
Jan 29 11:11:36.132686 containerd[1618]: time="2025-01-29T11:11:36.131027780Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:11:36.148464 containerd[1618]: time="2025-01-29T11:11:36.147591257Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f\""
Jan 29 11:11:36.150296 containerd[1618]: time="2025-01-29T11:11:36.150253416Z" level=info msg="StartContainer for \"536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f\""
Jan 29 11:11:36.221135 containerd[1618]: time="2025-01-29T11:11:36.220807561Z" level=info msg="StartContainer for \"536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f\" returns successfully"
Jan 29 11:11:36.249602 containerd[1618]: time="2025-01-29T11:11:36.249449235Z" level=info msg="shim disconnected" id=536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f namespace=k8s.io
Jan 29 11:11:36.249949 containerd[1618]: time="2025-01-29T11:11:36.249602115Z" level=warning msg="cleaning up after shim disconnected" id=536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f namespace=k8s.io
Jan 29 11:11:36.249949 containerd[1618]: time="2025-01-29T11:11:36.249641395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:36.888474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536b4d2a3d23690a9519eb3e88fdf873510d1f84ad50cec5cc498606c2eacb1f-rootfs.mount: Deactivated successfully.
Jan 29 11:11:37.136776 containerd[1618]: time="2025-01-29T11:11:37.135789549Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:11:37.151869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119195679.mount: Deactivated successfully.
Jan 29 11:11:37.154702 containerd[1618]: time="2025-01-29T11:11:37.154645588Z" level=info msg="CreateContainer within sandbox \"a9aae837dcee4c01a442da0a89858bd7c93d1fec302a9a345ea5484bb492430f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"297fde7bb703322e380408bcbee95c5174b612fcdaa39d9b2915d4287bfc8ba1\""
Jan 29 11:11:37.155535 containerd[1618]: time="2025-01-29T11:11:37.155492548Z" level=info msg="StartContainer for \"297fde7bb703322e380408bcbee95c5174b612fcdaa39d9b2915d4287bfc8ba1\""
Jan 29 11:11:37.217537 containerd[1618]: time="2025-01-29T11:11:37.217475585Z" level=info msg="StartContainer for \"297fde7bb703322e380408bcbee95c5174b612fcdaa39d9b2915d4287bfc8ba1\" returns successfully"
Jan 29 11:11:37.556016 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:11:38.161898 kubelet[3069]: I0129 11:11:38.161814 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vtfzj" podStartSLOduration=7.161788559 podStartE2EDuration="7.161788559s" podCreationTimestamp="2025-01-29 11:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:11:38.161375759 +0000 UTC m=+355.226219341" watchObservedRunningTime="2025-01-29 11:11:38.161788559 +0000 UTC m=+355.226632181"
Jan 29 11:11:39.273073 systemd[1]: run-containerd-runc-k8s.io-297fde7bb703322e380408bcbee95c5174b612fcdaa39d9b2915d4287bfc8ba1-runc.JImS8n.mount: Deactivated successfully.
Jan 29 11:11:40.640036 systemd-networkd[1240]: lxc_health: Link UP
Jan 29 11:11:40.654661 systemd-networkd[1240]: lxc_health: Gained carrier
Jan 29 11:11:42.091968 systemd-networkd[1240]: lxc_health: Gained IPv6LL
Jan 29 11:11:43.095851 containerd[1618]: time="2025-01-29T11:11:43.095604875Z" level=info msg="StopPodSandbox for \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\""
Jan 29 11:11:43.095851 containerd[1618]: time="2025-01-29T11:11:43.095706715Z" level=info msg="TearDown network for sandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" successfully"
Jan 29 11:11:43.095851 containerd[1618]: time="2025-01-29T11:11:43.095717435Z" level=info msg="StopPodSandbox for \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" returns successfully"
Jan 29 11:11:43.098738 containerd[1618]: time="2025-01-29T11:11:43.097465757Z" level=info msg="RemovePodSandbox for \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\""
Jan 29 11:11:43.098738 containerd[1618]: time="2025-01-29T11:11:43.097576677Z" level=info msg="Forcibly stopping sandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\""
Jan 29 11:11:43.098738 containerd[1618]: time="2025-01-29T11:11:43.097647717Z" level=info msg="TearDown network for sandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" successfully"
Jan 29 11:11:43.103620 containerd[1618]: time="2025-01-29T11:11:43.103264562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.103620 containerd[1618]: time="2025-01-29T11:11:43.103382642Z" level=info msg="RemovePodSandbox \"e34eb51498da56000d31b355653020ce8049908ce2c12960b5e2b196309d58e9\" returns successfully"
Jan 29 11:11:43.105018 containerd[1618]: time="2025-01-29T11:11:43.104678283Z" level=info msg="StopPodSandbox for \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\""
Jan 29 11:11:43.105018 containerd[1618]: time="2025-01-29T11:11:43.104855003Z" level=info msg="TearDown network for sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" successfully"
Jan 29 11:11:43.105018 containerd[1618]: time="2025-01-29T11:11:43.104875363Z" level=info msg="StopPodSandbox for \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" returns successfully"
Jan 29 11:11:43.106935 containerd[1618]: time="2025-01-29T11:11:43.106674365Z" level=info msg="RemovePodSandbox for \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\""
Jan 29 11:11:43.106935 containerd[1618]: time="2025-01-29T11:11:43.106728885Z" level=info msg="Forcibly stopping sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\""
Jan 29 11:11:43.106935 containerd[1618]: time="2025-01-29T11:11:43.106856085Z" level=info msg="TearDown network for sandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" successfully"
Jan 29 11:11:43.112650 containerd[1618]: time="2025-01-29T11:11:43.112496010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.112650 containerd[1618]: time="2025-01-29T11:11:43.112605570Z" level=info msg="RemovePodSandbox \"6f5fa72b6796aaf42b0f0e22cb1cbc07828de563896361040b62412887a3238d\" returns successfully"
Jan 29 11:11:45.841571 systemd[1]: run-containerd-runc-k8s.io-297fde7bb703322e380408bcbee95c5174b612fcdaa39d9b2915d4287bfc8ba1-runc.HPDg4c.mount: Deactivated successfully.
Jan 29 11:11:46.065290 sshd[5030]: Connection closed by 147.75.109.163 port 47444
Jan 29 11:11:46.066135 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:46.074088 systemd-logind[1595]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:11:46.075360 systemd[1]: sshd@24-188.245.239.20:22-147.75.109.163:47444.service: Deactivated successfully.
Jan 29 11:11:46.080993 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:11:46.082892 systemd-logind[1595]: Removed session 23.