Jan 29 16:07:49.909665 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:07:49.909692 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:07:49.909703 kernel: KASLR enabled
Jan 29 16:07:49.909709 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 16:07:49.909715 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 29 16:07:49.909720 kernel: random: crng init done
Jan 29 16:07:49.909727 kernel: secureboot: Secure boot disabled
Jan 29 16:07:49.909733 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:07:49.909739 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 16:07:49.909747 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 16:07:49.909754 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909761 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909767 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909773 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909781 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909789 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909795 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909802 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909808 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:07:49.909814 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 16:07:49.909820 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 16:07:49.909826 kernel: NUMA: Failed to initialise from firmware
Jan 29 16:07:49.909833 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 16:07:49.909839 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 29 16:07:49.909845 kernel: Zone ranges:
Jan 29 16:07:49.909852 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 16:07:49.909858 kernel:   DMA32  empty
Jan 29 16:07:49.909865 kernel:   Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 16:07:49.909871 kernel: Movable zone start for each node
Jan 29 16:07:49.909877 kernel: Early memory node ranges
Jan 29 16:07:49.909883 kernel:   node   0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 29 16:07:49.909889 kernel:   node   0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 29 16:07:49.909896 kernel:   node   0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 29 16:07:49.909902 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 16:07:49.909908 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 16:07:49.909914 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 16:07:49.909920 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 16:07:49.909927 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 16:07:49.909934 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 16:07:49.909940 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 16:07:49.909949 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 16:07:49.909956 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:07:49.909964 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:07:49.909972 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:07:49.909979 kernel: psci: Trusted OS migration not required
Jan 29 16:07:49.909985 kernel: psci: SMC Calling Convention v1.1
Jan 29 16:07:49.909992 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 16:07:49.909999 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:07:49.910005 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:07:49.910012 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 16:07:49.910019 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:07:49.910025 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:07:49.910032 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:07:49.910039 kernel: CPU features: detected: Spectre-v4
Jan 29 16:07:49.910046 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:07:49.910053 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:07:49.910059 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:07:49.910068 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:07:49.910074 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:07:49.910081 kernel: alternatives: applying boot alternatives
Jan 29 16:07:49.910089 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:49.910096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:07:49.910103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:07:49.910109 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:07:49.910118 kernel: Fallback order for Node 0: 0
Jan 29 16:07:49.910125 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 29 16:07:49.910132 kernel: Policy zone: Normal
Jan 29 16:07:49.910138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:07:49.910145 kernel: software IO TLB: area num 2.
Jan 29 16:07:49.910151 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 16:07:49.910158 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Jan 29 16:07:49.910165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:07:49.910172 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:07:49.910179 kernel: rcu: 	RCU event tracing is enabled.
Jan 29 16:07:49.910186 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:07:49.910193 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 29 16:07:49.910201 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 29 16:07:49.910208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:07:49.910214 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:07:49.910221 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:07:49.910227 kernel: GICv3: 256 SPIs implemented
Jan 29 16:07:49.910234 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:07:49.910240 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:07:49.910247 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:07:49.910253 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 16:07:49.910260 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 16:07:49.910267 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 16:07:49.910275 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 16:07:49.910282 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 16:07:49.910288 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 16:07:49.910295 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:07:49.910304 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:49.910311 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:07:49.910317 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:07:49.910340 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:07:49.910347 kernel: Console: colour dummy device 80x25
Jan 29 16:07:49.910354 kernel: ACPI: Core revision 20230628
Jan 29 16:07:49.910362 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:07:49.910371 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:07:49.910378 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:07:49.910387 kernel: landlock: Up and running.
Jan 29 16:07:49.910394 kernel: SELinux:  Initializing.
Jan 29 16:07:49.910409 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:49.910417 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:49.910424 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:49.910431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:49.910437 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:07:49.910446 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 29 16:07:49.910453 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 16:07:49.910460 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 16:07:49.910466 kernel: Remapping and enabling EFI services.
Jan 29 16:07:49.910474 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:07:49.910480 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:07:49.910487 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 16:07:49.910494 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 16:07:49.910501 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:49.910509 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:07:49.910516 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:07:49.910528 kernel: SMP: Total of 2 processors activated.
Jan 29 16:07:49.910537 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:07:49.910545 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:07:49.910552 kernel: CPU features: detected: Common not Private translations
Jan 29 16:07:49.910559 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:07:49.910566 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 16:07:49.910573 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:07:49.910582 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:07:49.910589 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:07:49.910596 kernel: CPU features: detected: RAS Extension Support
Jan 29 16:07:49.910603 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 16:07:49.910610 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:07:49.910617 kernel: alternatives: applying system-wide alternatives
Jan 29 16:07:49.910624 kernel: devtmpfs: initialized
Jan 29 16:07:49.910632 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:07:49.910641 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:07:49.910648 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:07:49.910655 kernel: SMBIOS 3.0.0 present.
Jan 29 16:07:49.910662 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 16:07:49.910670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:07:49.910677 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:07:49.910684 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:07:49.910691 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:07:49.910698 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:07:49.910706 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Jan 29 16:07:49.910714 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:07:49.910721 kernel: cpuidle: using governor menu
Jan 29 16:07:49.910729 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:07:49.910736 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:07:49.910745 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:07:49.910753 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:07:49.910761 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:07:49.910768 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:07:49.910779 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:07:49.910788 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:07:49.910795 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:07:49.910802 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:07:49.910809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:07:49.910816 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:07:49.910823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:07:49.910830 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:07:49.910837 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:07:49.910846 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:07:49.910853 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:07:49.910860 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:07:49.910867 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:07:49.910874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:07:49.910881 kernel: ACPI: Interpreter enabled
Jan 29 16:07:49.910888 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:07:49.910895 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 16:07:49.910903 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:07:49.910910 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:07:49.910918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:07:49.911069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:07:49.911143 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:07:49.911206 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:07:49.911269 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 16:07:49.911355 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 16:07:49.911366 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 16:07:49.911378 kernel: PCI host bridge to bus 0000:00
Jan 29 16:07:49.911473 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 16:07:49.911540 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 16:07:49.911600 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 16:07:49.911658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:07:49.911742 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 16:07:49.911825 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 16:07:49.911891 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 16:07:49.911958 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 16:07:49.912036 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912104 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 16:07:49.912176 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912243 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 16:07:49.912341 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912468 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 16:07:49.912555 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912625 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 16:07:49.912697 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912762 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 16:07:49.912842 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.912909 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 16:07:49.912980 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.913044 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 16:07:49.913118 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.913185 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 16:07:49.913262 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:07:49.913346 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 16:07:49.913446 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 16:07:49.913515 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 16:07:49.913592 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:07:49.913660 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 16:07:49.913731 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:07:49.913800 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 16:07:49.913873 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:07:49.913940 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 16:07:49.914015 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:07:49.914082 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 16:07:49.914148 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 16:07:49.914225 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:07:49.914293 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 16:07:49.914393 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:07:49.914486 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 16:07:49.914555 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 16:07:49.914631 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:07:49.914704 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 16:07:49.914770 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 16:07:49.914845 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:07:49.914912 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 16:07:49.914978 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 16:07:49.915044 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 16:07:49.915114 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 16:07:49.915179 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 16:07:49.915254 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 16:07:49.915482 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 16:07:49.915581 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 16:07:49.915645 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 16:07:49.915713 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 16:07:49.915775 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 16:07:49.915843 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 16:07:49.915911 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 16:07:49.915975 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 16:07:49.916039 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 16:07:49.916105 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 16:07:49.916169 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 16:07:49.916245 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 16:07:49.916346 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 16:07:49.916445 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 16:07:49.916526 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 16:07:49.916608 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:07:49.916683 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 16:07:49.916749 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 16:07:49.916817 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:07:49.916882 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 16:07:49.916949 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 16:07:49.917016 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:07:49.917080 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 16:07:49.917143 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 16:07:49.917209 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 16:07:49.917273 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:07:49.917703 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 16:07:49.917795 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:07:49.917862 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 16:07:49.917926 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:07:49.917991 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 16:07:49.918054 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:07:49.918520 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 16:07:49.918626 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:07:49.918694 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 16:07:49.918759 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:07:49.918842 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 16:07:49.918922 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:07:49.919005 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 16:07:49.919075 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:07:49.919146 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 16:07:49.919211 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:07:49.919288 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 16:07:49.919386 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 16:07:49.919475 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 16:07:49.919548 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:07:49.919631 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 16:07:49.919712 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:07:49.919801 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 16:07:49.919875 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:07:49.919943 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 16:07:49.920008 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 16:07:49.920082 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 16:07:49.920161 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 16:07:49.920241 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 16:07:49.920505 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 16:07:49.920634 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 16:07:49.920718 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 16:07:49.920799 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 16:07:49.920877 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 16:07:49.920949 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 16:07:49.921016 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 16:07:49.921099 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 16:07:49.921187 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 16:07:49.921262 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:07:49.921360 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 16:07:49.921459 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:07:49.921540 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Jan 29 16:07:49.921619 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 29 16:07:49.921697 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:07:49.921780 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 16:07:49.921854 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:07:49.921929 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Jan 29 16:07:49.921997 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 29 16:07:49.922069 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:07:49.922155 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 16:07:49.922228 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 16:07:49.922308 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:07:49.922464 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Jan 29 16:07:49.922535 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 29 16:07:49.922611 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:07:49.924490 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 16:07:49.924639 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:07:49.924761 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Jan 29 16:07:49.924853 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 29 16:07:49.924926 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:07:49.925004 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 16:07:49.925070 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 16:07:49.925136 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:07:49.925200 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Jan 29 16:07:49.925264 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 29 16:07:49.925341 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:07:49.925476 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 16:07:49.925552 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 16:07:49.925620 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:07:49.925684 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Jan 29 16:07:49.925748 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 16:07:49.925815 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:07:49.925888 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 16:07:49.925957 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 16:07:49.926032 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 16:07:49.926101 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:07:49.926165 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Jan 29 16:07:49.926237 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 16:07:49.926306 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:07:49.928561 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:07:49.928654 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Jan 29 16:07:49.928721 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 16:07:49.928796 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:07:49.928865 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:07:49.928930 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Jan 29 16:07:49.929003 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 29 16:07:49.929080 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:07:49.929163 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 16:07:49.929236 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 16:07:49.929311 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 16:07:49.929459 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 16:07:49.929534 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 16:07:49.929597 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:07:49.929669 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 16:07:49.929733 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 16:07:49.929804 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:07:49.929888 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 16:07:49.929955 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 16:07:49.930017 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:07:49.930347 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 16:07:49.930461 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 16:07:49.932473 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:07:49.932596 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 16:07:49.932661 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 16:07:49.932727 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:07:49.932796 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 16:07:49.932859 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 16:07:49.932918 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:07:49.932989 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 16:07:49.933049 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 16:07:49.933109 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:07:49.933176 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 16:07:49.933237 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 16:07:49.933300 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:07:49.933398 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 16:07:49.933479 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 16:07:49.933540 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:07:49.933551 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 16:07:49.933563 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 16:07:49.933571 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 16:07:49.933579 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 16:07:49.933588 kernel: iommu: Default domain type: Translated
Jan 29 16:07:49.933596 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:07:49.933604 kernel: efivars: Registered efivars operations
Jan 29 16:07:49.933611 kernel: vgaarb: loaded
Jan 29 16:07:49.933619 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:07:49.933626 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:07:49.933634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:07:49.933642 kernel: pnp: PnP ACPI init
Jan 29 16:07:49.933717 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 16:07:49.933730 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 16:07:49.933738 kernel: NET: Registered PF_INET protocol family
Jan 29 16:07:49.933746 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:07:49.933754 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:07:49.933761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:07:49.933769 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:07:49.933777 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:07:49.933786 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:07:49.933796 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:49.933803 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:49.933811 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:07:49.933891 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 16:07:49.933903 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:07:49.933910 kernel: kvm [1]: HYP mode not available
Jan 29 16:07:49.933918 kernel: Initialise system trusted keyrings
Jan 29 16:07:49.933925 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:07:49.933933 kernel: Key type asymmetric registered
Jan 29 16:07:49.933942 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:07:49.933951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:07:49.933959 kernel: io scheduler mq-deadline registered
Jan 29 16:07:49.933966 kernel: io scheduler kyber registered
Jan 29 16:07:49.933973 kernel: io scheduler bfq registered
Jan 29 16:07:49.933982 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 16:07:49.934052 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 16:07:49.934118 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 16:07:49.934185 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 16:07:49.934254 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 16:07:49.935781 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 16:07:49.935923 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 16:07:49.935996 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 16:07:49.936061 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 16:07:49.936133 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.936213 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 16:07:49.936278 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 16:07:49.936364 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.936487 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 16:07:49.936563 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 16:07:49.936632 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.936700 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 16:07:49.936766 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 16:07:49.936830 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.936899 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 16:07:49.936963 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 16:07:49.937030 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.937099 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 16:07:49.937164 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 16:07:49.937228 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
16:07:49.937238 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 16:07:49.937309 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 16:07:49.938217 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 16:07:49.938294 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:07:49.938305 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 16:07:49.938313 kernel: ACPI: button: Power Button [PWRB] Jan 29 16:07:49.938339 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 16:07:49.938471 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 16:07:49.938563 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 16:07:49.938575 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:07:49.938586 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 16:07:49.938655 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 16:07:49.938666 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 16:07:49.938674 kernel: thunder_xcv, ver 1.0 Jan 29 16:07:49.938681 kernel: thunder_bgx, ver 1.0 Jan 29 16:07:49.938689 kernel: nicpf, ver 1.0 Jan 29 16:07:49.938697 kernel: nicvf, ver 1.0 Jan 29 16:07:49.938776 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 16:07:49.938842 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:07:49 UTC (1738166869) Jan 29 16:07:49.938855 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 16:07:49.938863 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 16:07:49.938870 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 16:07:49.938878 kernel: watchdog: Hard watchdog permanently disabled Jan 29 16:07:49.938886 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:07:49.938894 kernel: Segment 
Routing with IPv6 Jan 29 16:07:49.938901 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:07:49.938910 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:07:49.938921 kernel: Key type dns_resolver registered Jan 29 16:07:49.938928 kernel: registered taskstats version 1 Jan 29 16:07:49.938936 kernel: Loading compiled-in X.509 certificates Jan 29 16:07:49.938944 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b' Jan 29 16:07:49.938951 kernel: Key type .fscrypt registered Jan 29 16:07:49.938959 kernel: Key type fscrypt-provisioning registered Jan 29 16:07:49.938967 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:07:49.938974 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:07:49.938982 kernel: ima: No architecture policies found Jan 29 16:07:49.938992 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 16:07:49.939000 kernel: clk: Disabling unused clocks Jan 29 16:07:49.939008 kernel: Freeing unused kernel memory: 38336K Jan 29 16:07:49.939016 kernel: Run /init as init process Jan 29 16:07:49.939024 kernel: with arguments: Jan 29 16:07:49.939032 kernel: /init Jan 29 16:07:49.939039 kernel: with environment: Jan 29 16:07:49.939046 kernel: HOME=/ Jan 29 16:07:49.939054 kernel: TERM=linux Jan 29 16:07:49.939062 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:07:49.939071 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:07:49.939082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:07:49.939090 systemd[1]: Detected virtualization kvm. Jan 29 16:07:49.939098 systemd[1]: Detected architecture arm64. 
Jan 29 16:07:49.939106 systemd[1]: Running in initrd.
Jan 29 16:07:49.939113 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:07:49.939123 systemd[1]: Hostname set to <localhost>.
Jan 29 16:07:49.939131 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:07:49.939139 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:07:49.939147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:49.939155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:49.939164 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:07:49.939172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:07:49.939180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:07:49.939191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:07:49.939200 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:07:49.939208 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:07:49.939216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:49.939224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:49.939232 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:07:49.939240 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:07:49.939251 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:07:49.939261 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:07:49.939269 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:49.939278 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:49.939287 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:07:49.939297 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:07:49.939305 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:49.939313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:49.939426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:49.939441 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:07:49.939449 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:07:49.939457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:07:49.939465 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:07:49.939473 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:07:49.939481 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:07:49.939489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:07:49.939497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:49.939545 systemd-journald[236]: Collecting audit messages is disabled.
Jan 29 16:07:49.939568 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:49.939577 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:49.939587 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:07:49.939596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:07:49.939606 systemd-journald[236]: Journal started
Jan 29 16:07:49.939625 systemd-journald[236]: Runtime Journal (/run/log/journal/a994ee601bac428baa7c7f30b4b3d756) is 8M, max 76.6M, 68.6M free.
Jan 29 16:07:49.924867 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 16:07:49.942914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:49.946671 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:07:49.946728 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:07:49.947812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:49.949847 kernel: Bridge firewalling registered
Jan 29 16:07:49.949001 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 16:07:49.950021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:49.956531 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:49.959629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:07:49.969040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:07:49.974183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:07:49.983019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:49.986371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:49.991358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:49.997605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:49.998675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:50.004919 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:07:50.020351 dracut-cmdline[274]: dracut-dracut-053
Jan 29 16:07:50.021056 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:50.039736 systemd-resolved[273]: Positive Trust Anchors:
Jan 29 16:07:50.039752 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:07:50.039784 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:07:50.045023 systemd-resolved[273]: Defaulting to hostname 'linux'.
Jan 29 16:07:50.049009 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:50.050388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:50.119380 kernel: SCSI subsystem initialized
Jan 29 16:07:50.124529 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:07:50.132553 kernel: iscsi: registered transport (tcp)
Jan 29 16:07:50.145364 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:07:50.145458 kernel: QLogic iSCSI HBA Driver
Jan 29 16:07:50.190160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:50.195524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:07:50.225728 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:07:50.225813 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:07:50.226379 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:07:50.280408 kernel: raid6: neonx8 gen() 15571 MB/s
Jan 29 16:07:50.295738 kernel: raid6: neonx4 gen() 15739 MB/s
Jan 29 16:07:50.312391 kernel: raid6: neonx2 gen() 13126 MB/s
Jan 29 16:07:50.329387 kernel: raid6: neonx1 gen() 10401 MB/s
Jan 29 16:07:50.346374 kernel: raid6: int64x8 gen() 6748 MB/s
Jan 29 16:07:50.363385 kernel: raid6: int64x4 gen() 7308 MB/s
Jan 29 16:07:50.380371 kernel: raid6: int64x2 gen() 6058 MB/s
Jan 29 16:07:50.397429 kernel: raid6: int64x1 gen() 5018 MB/s
Jan 29 16:07:50.397531 kernel: raid6: using algorithm neonx4 gen() 15739 MB/s
Jan 29 16:07:50.414373 kernel: raid6: .... xor() 12326 MB/s, rmw enabled
Jan 29 16:07:50.414472 kernel: raid6: using neon recovery algorithm
Jan 29 16:07:50.419413 kernel: xor: measuring software checksum speed
Jan 29 16:07:50.419478 kernel: 8regs : 15150 MB/sec
Jan 29 16:07:50.419499 kernel: 32regs : 21710 MB/sec
Jan 29 16:07:50.420359 kernel: arm64_neon : 23229 MB/sec
Jan 29 16:07:50.420390 kernel: xor: using function: arm64_neon (23229 MB/sec)
Jan 29 16:07:50.474369 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:07:50.489935 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:50.497556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:50.512820 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 29 16:07:50.516953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:50.524587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:07:50.539699 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Jan 29 16:07:50.579977 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:07:50.588729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:07:50.645487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:50.654607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:07:50.677409 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:07:50.679967 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:07:50.682714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:50.684043 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:07:50.690789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:07:50.703896 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:07:50.758378 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:07:50.766766 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:07:50.766810 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 16:07:50.770486 kernel: ACPI: bus type USB registered
Jan 29 16:07:50.770548 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:07:50.770559 kernel: usbcore: registered new interface driver hub
Jan 29 16:07:50.772429 kernel: usbcore: registered new device driver usb
Jan 29 16:07:50.812224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:50.812369 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:50.814627 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:50.815442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:50.815606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:50.818201 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:50.824612 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 29 16:07:50.828533 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 29 16:07:50.828667 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:07:50.828678 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:07:50.824868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:50.841373 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 29 16:07:50.850597 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 16:07:50.850728 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 29 16:07:50.850812 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 29 16:07:50.850893 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:07:50.850975 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:07:50.850985 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:07:50.864435 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 16:07:50.864590 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 16:07:50.864695 kernel: GPT:17805311 != 80003071
Jan 29 16:07:50.864706 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:07:50.864716 kernel: GPT:17805311 != 80003071
Jan 29 16:07:50.864724 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:07:50.864735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:50.864746 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:07:50.864835 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 29 16:07:50.864940 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 16:07:50.865023 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 16:07:50.865102 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:07:50.865205 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 16:07:50.865287 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 16:07:50.865562 kernel: hub 2-0:1.0: USB hub found
Jan 29 16:07:50.865664 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 16:07:50.855856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:50.866504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:50.893078 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:50.910343 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (524)
Jan 29 16:07:50.912354 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (520)
Jan 29 16:07:50.921777 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 16:07:50.946408 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 16:07:50.948046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 16:07:50.957593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:07:50.966905 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 16:07:50.983672 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:07:50.990836 disk-uuid[580]: Primary Header is updated.
Jan 29 16:07:50.990836 disk-uuid[580]: Secondary Entries is updated.
Jan 29 16:07:50.990836 disk-uuid[580]: Secondary Header is updated.
Jan 29 16:07:50.997356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:51.101352 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 16:07:51.344478 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 29 16:07:51.478878 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 29 16:07:51.478938 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 16:07:51.480347 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 29 16:07:51.534851 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 29 16:07:51.535057 kernel: usbcore: registered new interface driver usbhid
Jan 29 16:07:51.535077 kernel: usbhid: USB HID core driver
Jan 29 16:07:52.009461 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:52.012348 disk-uuid[581]: The operation has completed successfully.
Jan 29 16:07:52.079540 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:07:52.079664 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:07:52.093722 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:07:52.099558 sh[596]: Success
Jan 29 16:07:52.113549 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 16:07:52.159186 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:07:52.168564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:07:52.171466 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:07:52.194787 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a
Jan 29 16:07:52.194861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:52.194885 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:07:52.195762 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:07:52.195800 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:07:52.203401 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:07:52.206969 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:07:52.209288 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:07:52.217557 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:07:52.221834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:07:52.237640 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:52.237704 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:52.238337 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:52.243365 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:07:52.243438 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:52.254996 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:52.254477 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:07:52.262053 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:07:52.268554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:07:52.353680 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:07:52.365052 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:07:52.385010 ignition[698]: Ignition 2.20.0
Jan 29 16:07:52.385019 ignition[698]: Stage: fetch-offline
Jan 29 16:07:52.385060 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:52.385071 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:52.385231 ignition[698]: parsed url from cmdline: ""
Jan 29 16:07:52.385235 ignition[698]: no config URL provided
Jan 29 16:07:52.385240 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:07:52.391104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:07:52.385247 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:07:52.385253 ignition[698]: failed to fetch config: resource requires networking
Jan 29 16:07:52.394637 systemd-networkd[785]: lo: Link UP
Jan 29 16:07:52.385482 ignition[698]: Ignition finished successfully
Jan 29 16:07:52.394641 systemd-networkd[785]: lo: Gained carrier
Jan 29 16:07:52.396486 systemd-networkd[785]: Enumeration completed
Jan 29 16:07:52.396626 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:07:52.397146 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:52.397150 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:52.397288 systemd[1]: Reached target network.target - Network.
Jan 29 16:07:52.398767 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:52.398771 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:52.399299 systemd-networkd[785]: eth0: Link UP
Jan 29 16:07:52.399302 systemd-networkd[785]: eth0: Gained carrier
Jan 29 16:07:52.399310 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:52.402693 systemd-networkd[785]: eth1: Link UP
Jan 29 16:07:52.402697 systemd-networkd[785]: eth1: Gained carrier
Jan 29 16:07:52.402706 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:52.404670 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:07:52.418810 ignition[789]: Ignition 2.20.0
Jan 29 16:07:52.418820 ignition[789]: Stage: fetch
Jan 29 16:07:52.419001 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:52.419011 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:52.419114 ignition[789]: parsed url from cmdline: ""
Jan 29 16:07:52.419117 ignition[789]: no config URL provided
Jan 29 16:07:52.419122 ignition[789]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:07:52.419130 ignition[789]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:07:52.419217 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 16:07:52.420113 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 16:07:52.439471 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:07:52.466437 systemd-networkd[785]: eth0: DHCPv4 address 167.235.198.80/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:07:52.620846 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 16:07:52.626474 ignition[789]: GET result: OK
Jan 29 16:07:52.626620 ignition[789]: parsing config with SHA512: 3de629ca275ad55cd99eb9e9ffe9418881fcd12a56b2cf91d2e5de53d72f93be029067507b97138737f67c82154e3c2cccab8548e47e5bc90129a1f48420b8ed
Jan 29 16:07:52.635764 unknown[789]: fetched base config from "system"
Jan 29 16:07:52.636271 ignition[789]: fetch: fetch complete
Jan 29 16:07:52.635778 unknown[789]: fetched base config from "system"
Jan 29 16:07:52.636278 ignition[789]: fetch: fetch passed
Jan 29 16:07:52.635786 unknown[789]: fetched user config from "hetzner"
Jan 29 16:07:52.636362 ignition[789]: Ignition finished successfully
Jan 29 16:07:52.638951 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:07:52.647623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:07:52.669417 ignition[797]: Ignition 2.20.0
Jan 29 16:07:52.669433 ignition[797]: Stage: kargs
Jan 29 16:07:52.669625 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:52.669636 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:52.670605 ignition[797]: kargs: kargs passed
Jan 29 16:07:52.673164 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:07:52.670672 ignition[797]: Ignition finished successfully
Jan 29 16:07:52.680624 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:07:52.693150 ignition[803]: Ignition 2.20.0
Jan 29 16:07:52.693158 ignition[803]: Stage: disks
Jan 29 16:07:52.693354 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:52.693365 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:52.696653 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:07:52.694408 ignition[803]: disks: disks passed
Jan 29 16:07:52.698541 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:07:52.694469 ignition[803]: Ignition finished successfully
Jan 29 16:07:52.699184 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:07:52.700161 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:07:52.701180 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:07:52.702048 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:07:52.715697 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:07:52.736501 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:07:52.741625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:07:53.208626 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:07:53.266447 kernel: EXT4-fs (sda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none.
Jan 29 16:07:53.267660 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:07:53.269504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:07:53.283553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:07:53.287541 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:07:53.291952 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:07:53.292971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:07:53.293012 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:07:53.303263 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:07:53.310072 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (819)
Jan 29 16:07:53.313980 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:53.314042 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:53.314053 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:53.314281 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:07:53.323828 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:07:53.323893 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:53.329401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:07:53.363163 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:07:53.372243 coreos-metadata[821]: Jan 29 16:07:53.371 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 16:07:53.373921 coreos-metadata[821]: Jan 29 16:07:53.373 INFO Fetch successful
Jan 29 16:07:53.375347 coreos-metadata[821]: Jan 29 16:07:53.375 INFO wrote hostname ci-4230-0-0-d-0116a6be22 to /sysroot/etc/hostname
Jan 29 16:07:53.376474 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:07:53.377405 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:07:53.382948 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:07:53.387541 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:07:53.480778 systemd-networkd[785]: eth1: Gained IPv6LL
Jan 29 16:07:53.501161 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:07:53.509522 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:07:53.513578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:07:53.525502 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:53.542879 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:07:53.549353 ignition[936]: INFO : Ignition 2.20.0
Jan 29 16:07:53.549353 ignition[936]: INFO : Stage: mount
Jan 29 16:07:53.549353 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:53.549353 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:53.553271 ignition[936]: INFO : mount: mount passed
Jan 29 16:07:53.553271 ignition[936]: INFO : Ignition finished successfully
Jan 29 16:07:53.551285 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:07:53.557562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:07:54.119664 systemd-networkd[785]: eth0: Gained IPv6LL
Jan 29 16:07:54.193830 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:07:54.197683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:07:54.230349 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (948)
Jan 29 16:07:54.236351 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:54.236457 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:54.236480 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:54.240506 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:07:54.240586 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:54.243671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:07:54.264544 ignition[965]: INFO : Ignition 2.20.0
Jan 29 16:07:54.264544 ignition[965]: INFO : Stage: files
Jan 29 16:07:54.265788 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:54.265788 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:54.267610 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:07:54.267610 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:07:54.267610 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:07:54.271426 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:07:54.272502 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:07:54.272502 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:07:54.272007 unknown[965]: wrote ssh authorized keys file for user: core
Jan 29 16:07:54.275609 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:07:54.275609 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 16:07:54.340555 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:07:56.081192 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:07:56.083407 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:07:56.083407 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 16:07:56.732707 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:07:57.208089 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:07:57.209485 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:07:57.216945 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 16:07:57.832257 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:07:59.499257 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:07:59.499257 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:07:59.503238 ignition[965]: INFO : files: files passed
Jan 29 16:07:59.503238 ignition[965]: INFO : Ignition finished successfully
Jan 29 16:07:59.503706 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:07:59.509667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:07:59.513748 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:07:59.519552 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:07:59.519649 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:07:59.531506 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:59.531506 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:59.535132 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:59.539397 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:07:59.541217 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:07:59.550592 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:07:59.579051 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:07:59.579219 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:07:59.582728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:07:59.583702 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:07:59.584821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:07:59.595526 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:07:59.611190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:07:59.616734 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:07:59.630469 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:59.631155 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:59.632515 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:07:59.633525 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:07:59.633654 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:07:59.634911 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:07:59.635557 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:07:59.636565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:07:59.637546 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:07:59.638561 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:07:59.639599 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:07:59.640627 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:07:59.641757 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:07:59.642708 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:07:59.643752 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:07:59.644648 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:07:59.644776 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:07:59.645965 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:59.646637 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:59.647626 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:07:59.648039 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:59.648762 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:07:59.648892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:07:59.650503 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:07:59.650630 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:07:59.651667 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:07:59.651762 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:07:59.652861 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:07:59.652952 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:07:59.658589 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:07:59.661565 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:07:59.662033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:07:59.662149 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:59.663072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:07:59.663165 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:07:59.675101 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:07:59.677352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:07:59.679055 ignition[1018]: INFO : Ignition 2.20.0
Jan 29 16:07:59.679055 ignition[1018]: INFO : Stage: umount
Jan 29 16:07:59.680656 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:59.680656 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:07:59.682660 ignition[1018]: INFO : umount: umount passed
Jan 29 16:07:59.682660 ignition[1018]: INFO : Ignition finished successfully
Jan 29 16:07:59.682467 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:07:59.684210 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:07:59.685616 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:07:59.685719 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:07:59.689688 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:07:59.689794 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:07:59.694072 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:07:59.694132 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:07:59.694798 systemd[1]: Stopped target network.target - Network.
Jan 29 16:07:59.695243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:07:59.695314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:07:59.695920 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:07:59.698457 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:07:59.699652 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:59.700276 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:07:59.701270 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:07:59.702123 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:07:59.702167 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:59.703021 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:07:59.703057 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:59.704155 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:07:59.704212 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:07:59.705011 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:07:59.705049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:59.706049 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:07:59.706808 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:59.708690 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:07:59.709192 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:07:59.709289 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:07:59.710705 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:07:59.710809 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:07:59.715792 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:07:59.715924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:59.721433 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:07:59.722123 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:07:59.722216 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:59.727406 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:07:59.728624 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:07:59.728778 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:07:59.732171 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:07:59.732886 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:07:59.732948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:59.740520 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:07:59.741447 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:07:59.741545 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:07:59.743488 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:07:59.743566 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:59.744920 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:07:59.744967 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:59.745756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:59.747890 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:07:59.755176 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:07:59.755454 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:59.761871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:07:59.761931 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:59.762792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:07:59.762822 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:59.764162 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:07:59.764217 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:59.766223 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:07:59.766274 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:59.767928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:59.767975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:59.782130 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:07:59.783625 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:07:59.783744 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:59.786814 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:07:59.786879 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:59.790110 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:07:59.790176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:59.791271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:59.792911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:59.795091 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:07:59.795232 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:07:59.796502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:07:59.796607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:07:59.798386 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:07:59.805669 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:07:59.818125 systemd[1]: Switching root.
Jan 29 16:07:59.851498 systemd-journald[236]: Journal stopped
Jan 29 16:08:00.905052 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:08:00.905184 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:08:00.905200 kernel: SELinux: policy capability open_perms=1
Jan 29 16:08:00.905210 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:08:00.905224 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:08:00.905260 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:08:00.905270 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:08:00.905279 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:08:00.905288 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:08:00.905310 kernel: audit: type=1403 audit(1738166880.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:08:00.905333 systemd[1]: Successfully loaded SELinux policy in 40.241ms.
Jan 29 16:08:00.905362 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.303ms.
Jan 29 16:08:00.905376 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:08:00.905387 systemd[1]: Detected virtualization kvm.
Jan 29 16:08:00.905397 systemd[1]: Detected architecture arm64.
Jan 29 16:08:00.905407 systemd[1]: Detected first boot.
Jan 29 16:08:00.905416 systemd[1]: Hostname set to <ci-4230-0-0-d-0116a6be22>.
Jan 29 16:08:00.905426 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:08:00.906385 zram_generator::config[1064]: No configuration found.
Jan 29 16:08:00.906410 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:08:00.906428 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:08:00.906440 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:08:00.906451 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:08:00.906461 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:08:00.906470 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:08:00.906481 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:08:00.906491 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:08:00.906501 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:08:00.906511 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:08:00.906523 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:08:00.906533 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:08:00.906543 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:08:00.906558 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:08:00.906568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:08:00.906578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:08:00.906588 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:08:00.906598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:08:00.906610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:08:00.906620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:08:00.906630 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 16:08:00.906640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:08:00.906651 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:08:00.906661 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:08:00.906671 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:08:00.906683 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:08:00.906692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:08:00.906707 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:08:00.906717 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:08:00.906727 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:08:00.906738 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:08:00.906748 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:08:00.906758 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:08:00.906772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:08:00.906797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:08:00.906807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:08:00.906817 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:08:00.906827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:08:00.906837 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:08:00.906848 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:08:00.906859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:08:00.906869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:08:00.906879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:08:00.906890 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:08:00.906904 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:08:00.906914 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:08:00.906925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:00.906938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:08:00.906952 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:08:00.906964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:00.906976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:08:00.906986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:00.906996 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:08:00.907006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:00.907016 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:08:00.907026 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:08:00.907037 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:08:00.907048 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:08:00.907057 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:08:00.907068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:00.907078 kernel: fuse: init (API version 7.39)
Jan 29 16:08:00.907088 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:08:00.907098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:08:00.907109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:08:00.907119 kernel: ACPI: bus type drm_connector registered
Jan 29 16:08:00.907132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:08:00.907142 kernel: loop: module loaded
Jan 29 16:08:00.907151 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:08:00.907161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:08:00.907171 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:08:00.907182 systemd[1]: Stopped verity-setup.service.
Jan 29 16:08:00.907193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:08:00.907203 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:08:00.907213 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:08:00.907223 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:08:00.907236 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:08:00.907250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:08:00.907260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:08:00.907270 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:08:00.907280 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:08:00.907299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:00.907312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:00.907357 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:08:00.907479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:08:00.907501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:00.907511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:00.907523 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:08:00.907534 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:08:00.907545 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:00.907555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:00.907565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:08:00.907612 systemd-journald[1135]: Collecting audit messages is disabled.
Jan 29 16:08:00.907638 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:08:00.907649 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:08:00.907660 systemd-journald[1135]: Journal started
Jan 29 16:08:00.907682 systemd-journald[1135]: Runtime Journal (/run/log/journal/a994ee601bac428baa7c7f30b4b3d756) is 8M, max 76.6M, 68.6M free.
Jan 29 16:08:00.641053 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:08:00.652796 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:08:00.653636 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:08:00.912371 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:08:00.912383 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:08:00.923470 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:08:00.930703 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:08:00.937561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:08:00.942048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:08:00.942841 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:08:00.942880 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:08:00.944586 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:08:00.951541 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:08:00.955894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:08:00.957753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:00.964608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:08:00.970744 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:08:00.974541 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:00.981547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:08:00.982202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:00.984180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:08:00.990007 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:08:00.996169 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:08:00.999776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:08:01.000588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:08:01.004178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:08:01.005469 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:08:01.013073 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:08:01.022568 systemd-journald[1135]: Time spent on flushing to /var/log/journal/a994ee601bac428baa7c7f30b4b3d756 is 39.448ms for 1145 entries.
Jan 29 16:08:01.022568 systemd-journald[1135]: System Journal (/var/log/journal/a994ee601bac428baa7c7f30b4b3d756) is 8M, max 584.8M, 576.8M free.
Jan 29 16:08:01.074532 systemd-journald[1135]: Received client request to flush runtime journal.
Jan 29 16:08:01.074586 kernel: loop0: detected capacity change from 0 to 113512
Jan 29 16:08:01.038301 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:08:01.039581 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:08:01.054930 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:08:01.065199 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:08:01.082602 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:08:01.096472 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Jan 29 16:08:01.096489 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Jan 29 16:08:01.099766 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:08:01.107353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:08:01.109546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:08:01.113244 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:08:01.123774 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:08:01.135359 kernel: loop1: detected capacity change from 0 to 123192
Jan 29 16:08:01.160575 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:08:01.167732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:08:01.179557 kernel: loop2: detected capacity change from 0 to 201592
Jan 29 16:08:01.185835 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 16:08:01.185952 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 16:08:01.193489 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:08:01.235346 kernel: loop3: detected capacity change from 0 to 8
Jan 29 16:08:01.258450 kernel: loop4: detected capacity change from 0 to 113512
Jan 29 16:08:01.288521 kernel: loop5: detected capacity change from 0 to 123192
Jan 29 16:08:01.305400 kernel: loop6: detected capacity change from 0 to 201592
Jan 29 16:08:01.343340 kernel: loop7: detected capacity change from 0 to 8
Jan 29 16:08:01.344182 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 16:08:01.344804 (sd-merge)[1214]: Merged extensions into '/usr'.
Jan 29 16:08:01.350864 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:08:01.351062 systemd[1]: Reloading...
Jan 29 16:08:01.498195 zram_generator::config[1242]: No configuration found.
Jan 29 16:08:01.539143 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:08:01.632874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:01.694900 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:08:01.695580 systemd[1]: Reloading finished in 344 ms.
Jan 29 16:08:01.716230 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:08:01.718537 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:08:01.734975 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:08:01.738744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:08:01.763209 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:08:01.765400 systemd[1]: Reloading...
Jan 29 16:08:01.774643 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:08:01.774859 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:08:01.775617 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:08:01.775816 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 29 16:08:01.775860 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 29 16:08:01.783514 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:08:01.783528 systemd-tmpfiles[1280]: Skipping /boot
Jan 29 16:08:01.796685 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:08:01.796700 systemd-tmpfiles[1280]: Skipping /boot
Jan 29 16:08:01.864363 zram_generator::config[1309]: No configuration found.
Jan 29 16:08:01.974232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:02.036407 systemd[1]: Reloading finished in 270 ms.
Jan 29 16:08:02.051381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:08:02.065346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:08:02.078772 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:08:02.084471 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:08:02.087562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:08:02.093122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:08:02.097512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:08:02.100549 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:08:02.105228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:02.108645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:02.124820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:02.133657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:02.134763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:02.134895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:02.149709 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:08:02.150778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:02.150945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:02.161718 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:08:02.163600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:02.165385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:02.171818 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:02.172187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:02.181339 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jan 29 16:08:02.191199 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:08:02.196975 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:08:02.200951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:02.207669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:02.218520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:08:02.224570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:02.231616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:02.232665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:02.232716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:02.239619 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:08:02.248531 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:08:02.249520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:08:02.253346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:08:02.264920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:02.265606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:02.273889 augenrules[1402]: No rules
Jan 29 16:08:02.279022 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:08:02.279251 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:08:02.281979 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:08:02.282258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:08:02.287274 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:08:02.297195 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:08:02.297838 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:08:02.304755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:02.304969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:02.305863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:02.315176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:02.315711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:02.316741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:02.332579 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:08:02.386744 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 16:08:02.470015 systemd-networkd[1417]: lo: Link UP
Jan 29 16:08:02.470041 systemd-networkd[1417]: lo: Gained carrier
Jan 29 16:08:02.472053 systemd-networkd[1417]: Enumeration completed
Jan 29 16:08:02.472217 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:08:02.473203 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:02.473207 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:08:02.474088 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:02.474093 systemd-networkd[1417]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:08:02.474582 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:02.474607 systemd-networkd[1417]: eth0: Link UP
Jan 29 16:08:02.474610 systemd-networkd[1417]: eth0: Gained carrier
Jan 29 16:08:02.474618 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:02.490597 systemd-resolved[1351]: Positive Trust Anchors:
Jan 29 16:08:02.490615 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:08:02.490648 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:08:02.496887 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:08:02.499378 systemd-networkd[1417]: eth1: Link UP
Jan 29 16:08:02.499391 systemd-networkd[1417]: eth1: Gained carrier
Jan 29 16:08:02.499416 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:02.503667 systemd-resolved[1351]: Using system hostname 'ci-4230-0-0-d-0116a6be22'.
Jan 29 16:08:02.505552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:08:02.506354 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:08:02.507163 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:08:02.510430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:08:02.512571 systemd[1]: Reached target network.target - Network.
Jan 29 16:08:02.513101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:08:02.528475 systemd-networkd[1417]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:08:02.529943 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jan 29 16:08:02.536369 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:08:02.554433 systemd-networkd[1417]: eth0: DHCPv4 address 167.235.198.80/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:08:02.555222 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jan 29 16:08:02.556039 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jan 29 16:08:02.586356 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1420)
Jan 29 16:08:02.605416 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:08:02.638865 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 16:08:02.638992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:02.645607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:02.647940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:02.650585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:02.651643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:02.651687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:02.651710 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:08:02.661142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:02.661393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:02.687515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:08:02.688740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:02.688924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:02.690667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:02.690895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:02.700636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:08:02.701309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:02.701434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:02.711013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:08:02.726669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:08:02.728128 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 16:08:02.728179 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 16:08:02.728207 kernel: [drm] features: -context_init
Jan 29 16:08:02.729361 kernel: [drm] number of scanouts: 1
Jan 29 16:08:02.729412 kernel: [drm] number of cap sets: 0
Jan 29 16:08:02.735377 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 16:08:02.740373 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 16:08:02.747370 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 16:08:02.755064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:08:02.755398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:08:02.757906 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:08:02.763696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:08:02.822937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:08:02.879516 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:08:02.888745 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:08:02.901874 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:08:02.933104 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:08:02.934216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:08:02.934953 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:08:02.935771 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:08:02.936592 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:08:02.937532 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:08:02.938743 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:08:02.939411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:08:02.940035 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:08:02.940070 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:08:02.940587 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:08:02.942934 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:08:02.945846 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:08:02.949941 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:08:02.951142 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:08:02.951919 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:08:02.955411 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:08:02.957129 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:08:02.966692 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:08:02.968811 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:08:02.970215 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:08:02.971736 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:08:02.973362 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:08:02.972347 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:08:02.972377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:08:02.975516 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:08:02.986947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:08:02.991951 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:08:03.000548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:08:03.004581 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:08:03.005564 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:08:03.010627 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:08:03.015513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:08:03.017602 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 16:08:03.022545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:08:03.033363 jq[1486]: false Jan 29 16:08:03.028620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:08:03.037783 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:08:03.040046 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:08:03.041837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 29 16:08:03.045482 coreos-metadata[1482]: Jan 29 16:08:03.043 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:08:03.044790 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:08:03.071050 coreos-metadata[1482]: Jan 29 16:08:03.050 INFO Fetch successful Jan 29 16:08:03.071050 coreos-metadata[1482]: Jan 29 16:08:03.051 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:08:03.071050 coreos-metadata[1482]: Jan 29 16:08:03.051 INFO Fetch successful Jan 29 16:08:03.052454 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:08:03.056948 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:08:03.064832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:08:03.065090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 29 16:08:03.078754 dbus-daemon[1484]: [system] SELinux support is enabled Jan 29 16:08:03.099691 extend-filesystems[1487]: Found loop4 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found loop5 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found loop6 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found loop7 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda1 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda2 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda3 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found usr Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda4 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda6 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda7 Jan 29 16:08:03.099691 extend-filesystems[1487]: Found sda9 Jan 29 16:08:03.099691 extend-filesystems[1487]: Checking size of /dev/sda9 Jan 29 16:08:03.105366 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:08:03.114392 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:08:03.114659 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:08:03.127563 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:08:03.147657 jq[1497]: true Jan 29 16:08:03.127752 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:08:03.145256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:08:03.147940 tar[1500]: linux-arm64/LICENSE Jan 29 16:08:03.147940 tar[1500]: linux-arm64/helm Jan 29 16:08:03.145385 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 16:08:03.146130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:08:03.146149 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:08:03.161216 update_engine[1496]: I20250129 16:08:03.156855 1496 main.cc:92] Flatcar Update Engine starting Jan 29 16:08:03.161216 update_engine[1496]: I20250129 16:08:03.160773 1496 update_check_scheduler.cc:74] Next update check in 5m24s Jan 29 16:08:03.168171 jq[1514]: true Jan 29 16:08:03.163726 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:08:03.168576 extend-filesystems[1487]: Resized partition /dev/sda9 Jan 29 16:08:03.164921 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:08:03.172214 extend-filesystems[1529]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:08:03.169505 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:08:03.183977 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:08:03.280782 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:08:03.282964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:08:03.292709 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1425) Jan 29 16:08:03.287023 systemd-logind[1494]: New seat seat0. Jan 29 16:08:03.293855 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 16:08:03.293871 systemd-logind[1494]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 16:08:03.294598 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 16:08:03.394567 bash[1552]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:08:03.395496 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:08:03.409738 systemd[1]: Starting sshkeys.service... Jan 29 16:08:03.447720 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:08:03.493345 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:08:03.502920 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:08:03.540254 extend-filesystems[1529]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:08:03.540254 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:08:03.540254 extend-filesystems[1529]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 16:08:03.548775 extend-filesystems[1487]: Resized filesystem in /dev/sda9 Jan 29 16:08:03.548775 extend-filesystems[1487]: Found sr0 Jan 29 16:08:03.541447 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:08:03.541656 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:08:03.565371 coreos-metadata[1564]: Jan 29 16:08:03.562 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:08:03.566360 coreos-metadata[1564]: Jan 29 16:08:03.565 INFO Fetch successful Jan 29 16:08:03.575072 unknown[1564]: wrote ssh authorized keys file for user: core Jan 29 16:08:03.591456 systemd-networkd[1417]: eth0: Gained IPv6LL Jan 29 16:08:03.591979 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Jan 29 16:08:03.595798 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:08:03.597239 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 29 16:08:03.611350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:08:03.615443 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:08:03.626133 update-ssh-keys[1570]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:08:03.625493 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:08:03.637370 systemd[1]: Finished sshkeys.service. Jan 29 16:08:03.655745 systemd-networkd[1417]: eth1: Gained IPv6LL Jan 29 16:08:03.656137 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Jan 29 16:08:03.674175 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:08:03.693659 containerd[1518]: time="2025-01-29T16:08:03.691739160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:08:03.715843 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:08:03.785965 containerd[1518]: time="2025-01-29T16:08:03.784134800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.785965 containerd[1518]: time="2025-01-29T16:08:03.785827360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:03.785965 containerd[1518]: time="2025-01-29T16:08:03.785864400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:08:03.785965 containerd[1518]: time="2025-01-29T16:08:03.785881520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 16:08:03.786119 containerd[1518]: time="2025-01-29T16:08:03.786047920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:08:03.786119 containerd[1518]: time="2025-01-29T16:08:03.786064920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786153 containerd[1518]: time="2025-01-29T16:08:03.786126880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786153 containerd[1518]: time="2025-01-29T16:08:03.786141440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786412 containerd[1518]: time="2025-01-29T16:08:03.786382920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786412 containerd[1518]: time="2025-01-29T16:08:03.786408800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786479 containerd[1518]: time="2025-01-29T16:08:03.786423000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:03.786479 containerd[1518]: time="2025-01-29T16:08:03.786434720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786515040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786707800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786836800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786849680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786918880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:08:03.787046 containerd[1518]: time="2025-01-29T16:08:03.786960200Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:08:03.802984 containerd[1518]: time="2025-01-29T16:08:03.802930280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:08:03.804762 containerd[1518]: time="2025-01-29T16:08:03.804449040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:08:03.804762 containerd[1518]: time="2025-01-29T16:08:03.804509080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:08:03.804762 containerd[1518]: time="2025-01-29T16:08:03.804528560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:08:03.804762 containerd[1518]: time="2025-01-29T16:08:03.804544880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 16:08:03.805004 containerd[1518]: time="2025-01-29T16:08:03.804973640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:08:03.805275 containerd[1518]: time="2025-01-29T16:08:03.805245200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:08:03.805563 containerd[1518]: time="2025-01-29T16:08:03.805542480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:08:03.805598 containerd[1518]: time="2025-01-29T16:08:03.805568680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:08:03.805598 containerd[1518]: time="2025-01-29T16:08:03.805584600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:08:03.805631 containerd[1518]: time="2025-01-29T16:08:03.805597880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805631 containerd[1518]: time="2025-01-29T16:08:03.805618240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805668 containerd[1518]: time="2025-01-29T16:08:03.805632840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805668 containerd[1518]: time="2025-01-29T16:08:03.805648480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805668 containerd[1518]: time="2025-01-29T16:08:03.805663520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 16:08:03.805719 containerd[1518]: time="2025-01-29T16:08:03.805687320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805719 containerd[1518]: time="2025-01-29T16:08:03.805703000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805719 containerd[1518]: time="2025-01-29T16:08:03.805714680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:08:03.805765 containerd[1518]: time="2025-01-29T16:08:03.805735480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805765 containerd[1518]: time="2025-01-29T16:08:03.805749040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805803 containerd[1518]: time="2025-01-29T16:08:03.805767120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805803 containerd[1518]: time="2025-01-29T16:08:03.805780960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805803 containerd[1518]: time="2025-01-29T16:08:03.805793200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805860 containerd[1518]: time="2025-01-29T16:08:03.805805960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.805860 containerd[1518]: time="2025-01-29T16:08:03.805817440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806071 containerd[1518]: time="2025-01-29T16:08:03.805830120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 16:08:03.806102 containerd[1518]: time="2025-01-29T16:08:03.806083840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806121 containerd[1518]: time="2025-01-29T16:08:03.806101040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806363 containerd[1518]: time="2025-01-29T16:08:03.806340120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806403 containerd[1518]: time="2025-01-29T16:08:03.806367320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806403 containerd[1518]: time="2025-01-29T16:08:03.806383800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806449 containerd[1518]: time="2025-01-29T16:08:03.806400880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:08:03.806449 containerd[1518]: time="2025-01-29T16:08:03.806443760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806485 containerd[1518]: time="2025-01-29T16:08:03.806463000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.806485 containerd[1518]: time="2025-01-29T16:08:03.806474560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.806979720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807139800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807155400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807167600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807231240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807255320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807287560Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:08:03.809333 containerd[1518]: time="2025-01-29T16:08:03.807311960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:08:03.809538 containerd[1518]: time="2025-01-29T16:08:03.808259120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:08:03.809538 containerd[1518]: time="2025-01-29T16:08:03.808491280Z" level=info msg="Connect containerd service" Jan 29 16:08:03.809538 containerd[1518]: time="2025-01-29T16:08:03.808548760Z" level=info msg="using legacy CRI server" Jan 29 16:08:03.809538 containerd[1518]: time="2025-01-29T16:08:03.808557320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:08:03.809538 containerd[1518]: time="2025-01-29T16:08:03.808834240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:08:03.810212 containerd[1518]: time="2025-01-29T16:08:03.810183160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:08:03.810429 containerd[1518]: time="2025-01-29T16:08:03.810396040Z" level=info msg="Start subscribing containerd event" Jan 29 16:08:03.810459 containerd[1518]: time="2025-01-29T16:08:03.810446240Z" level=info msg="Start recovering state" Jan 29 16:08:03.810687 containerd[1518]: time="2025-01-29T16:08:03.810618760Z" level=info msg="Start event monitor" Jan 29 16:08:03.810687 containerd[1518]: time="2025-01-29T16:08:03.810642600Z" level=info msg="Start 
snapshots syncer" Jan 29 16:08:03.810687 containerd[1518]: time="2025-01-29T16:08:03.810652120Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:08:03.810687 containerd[1518]: time="2025-01-29T16:08:03.810659120Z" level=info msg="Start streaming server" Jan 29 16:08:03.812921 containerd[1518]: time="2025-01-29T16:08:03.812414640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:08:03.812921 containerd[1518]: time="2025-01-29T16:08:03.812792040Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:08:03.816854 containerd[1518]: time="2025-01-29T16:08:03.816801160Z" level=info msg="containerd successfully booted in 0.128922s" Jan 29 16:08:03.816985 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:08:04.193568 tar[1500]: linux-arm64/README.md Jan 29 16:08:04.212283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:08:04.679608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:08:04.680542 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:08:05.297573 kubelet[1596]: E0129 16:08:05.297483 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:08:05.301035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:08:05.301188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:08:05.303445 systemd[1]: kubelet.service: Consumed 881ms CPU time, 250.4M memory peak. 
Jan 29 16:08:05.769850 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:08:05.797032 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:08:05.804019 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:08:05.818538 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:08:05.818868 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:08:05.825770 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:08:05.839416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:08:05.849821 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:08:05.853465 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 16:08:05.855111 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:08:05.856655 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:08:05.857339 systemd[1]: Startup finished in 819ms (kernel) + 10.333s (initrd) + 5.864s (userspace) = 17.017s. Jan 29 16:08:15.552214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:08:15.565699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:08:15.693405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:08:15.698137 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:08:15.754887 kubelet[1633]: E0129 16:08:15.754810 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:08:15.757734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:08:15.758004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:08:15.758804 systemd[1]: kubelet.service: Consumed 169ms CPU time, 104.9M memory peak. Jan 29 16:08:26.009466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:08:26.017736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:08:26.183459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:08:26.187999 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:08:26.231965 kubelet[1648]: E0129 16:08:26.231879 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:08:26.234414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:08:26.234559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:08:26.235091 systemd[1]: kubelet.service: Consumed 144ms CPU time, 102.4M memory peak. 
Jan 29 16:08:33.491336 systemd-resolved[1351]: Clock change detected. Flushing caches. Jan 29 16:08:33.491608 systemd-timesyncd[1386]: Contacted time server 141.144.246.224:123 (2.flatcar.pool.ntp.org). Jan 29 16:08:33.491684 systemd-timesyncd[1386]: Initial clock synchronization to Wed 2025-01-29 16:08:33.491277 UTC. Jan 29 16:08:34.020309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:08:34.034389 systemd[1]: Started sshd@0-167.235.198.80:22-149.50.252.131:38396.service - OpenSSH per-connection server daemon (149.50.252.131:38396). Jan 29 16:08:34.036924 systemd[1]: Started sshd@1-167.235.198.80:22-149.50.252.131:38400.service - OpenSSH per-connection server daemon (149.50.252.131:38400). Jan 29 16:08:34.214963 sshd[1657]: Connection closed by 149.50.252.131 port 38396 [preauth] Jan 29 16:08:34.218208 systemd[1]: sshd@0-167.235.198.80:22-149.50.252.131:38396.service: Deactivated successfully. Jan 29 16:08:34.242333 sshd[1658]: Connection closed by 149.50.252.131 port 38400 [preauth] Jan 29 16:08:34.243585 systemd[1]: sshd@1-167.235.198.80:22-149.50.252.131:38400.service: Deactivated successfully. Jan 29 16:08:34.286314 systemd[1]: Started sshd@2-167.235.198.80:22-149.50.252.131:38406.service - OpenSSH per-connection server daemon (149.50.252.131:38406). Jan 29 16:08:34.314137 systemd[1]: Started sshd@3-167.235.198.80:22-149.50.252.131:38416.service - OpenSSH per-connection server daemon (149.50.252.131:38416). Jan 29 16:08:34.446078 sshd[1666]: Connection closed by 149.50.252.131 port 38406 [preauth] Jan 29 16:08:34.447823 systemd[1]: sshd@2-167.235.198.80:22-149.50.252.131:38406.service: Deactivated successfully. Jan 29 16:08:34.497323 sshd[1669]: Connection closed by 149.50.252.131 port 38416 [preauth] Jan 29 16:08:34.499493 systemd[1]: sshd@3-167.235.198.80:22-149.50.252.131:38416.service: Deactivated successfully. 
Jan 29 16:08:34.538854 systemd[1]: Started sshd@4-167.235.198.80:22-149.50.252.131:38418.service - OpenSSH per-connection server daemon (149.50.252.131:38418).
Jan 29 16:08:34.580321 systemd[1]: Started sshd@5-167.235.198.80:22-149.50.252.131:38434.service - OpenSSH per-connection server daemon (149.50.252.131:38434).
Jan 29 16:08:34.725822 sshd[1676]: Connection closed by 149.50.252.131 port 38418 [preauth]
Jan 29 16:08:34.728114 systemd[1]: sshd@4-167.235.198.80:22-149.50.252.131:38418.service: Deactivated successfully.
Jan 29 16:08:34.774066 sshd[1679]: Connection closed by 149.50.252.131 port 38434 [preauth]
Jan 29 16:08:34.776696 systemd[1]: sshd@5-167.235.198.80:22-149.50.252.131:38434.service: Deactivated successfully.
Jan 29 16:08:34.806346 systemd[1]: Started sshd@6-167.235.198.80:22-149.50.252.131:38450.service - OpenSSH per-connection server daemon (149.50.252.131:38450).
Jan 29 16:08:34.833012 systemd[1]: Started sshd@7-167.235.198.80:22-149.50.252.131:38464.service - OpenSSH per-connection server daemon (149.50.252.131:38464).
Jan 29 16:08:34.992532 sshd[1686]: Connection closed by 149.50.252.131 port 38450 [preauth]
Jan 29 16:08:34.993735 systemd[1]: sshd@6-167.235.198.80:22-149.50.252.131:38450.service: Deactivated successfully.
Jan 29 16:08:34.999043 sshd[1689]: Connection closed by 149.50.252.131 port 38464 [preauth]
Jan 29 16:08:35.000324 systemd[1]: sshd@7-167.235.198.80:22-149.50.252.131:38464.service: Deactivated successfully.
Jan 29 16:08:35.070259 systemd[1]: Started sshd@8-167.235.198.80:22-149.50.252.131:38468.service - OpenSSH per-connection server daemon (149.50.252.131:38468).
Jan 29 16:08:35.087201 systemd[1]: Started sshd@9-167.235.198.80:22-149.50.252.131:38484.service - OpenSSH per-connection server daemon (149.50.252.131:38484).
Jan 29 16:08:35.250545 sshd[1696]: Connection closed by 149.50.252.131 port 38468 [preauth]
Jan 29 16:08:35.251342 systemd[1]: sshd@8-167.235.198.80:22-149.50.252.131:38468.service: Deactivated successfully.
Jan 29 16:08:35.272207 sshd[1698]: Connection closed by 149.50.252.131 port 38484 [preauth]
Jan 29 16:08:35.273588 systemd[1]: sshd@9-167.235.198.80:22-149.50.252.131:38484.service: Deactivated successfully.
Jan 29 16:08:35.352136 systemd[1]: Started sshd@10-167.235.198.80:22-149.50.252.131:38486.service - OpenSSH per-connection server daemon (149.50.252.131:38486).
Jan 29 16:08:35.356255 systemd[1]: Started sshd@11-167.235.198.80:22-149.50.252.131:38492.service - OpenSSH per-connection server daemon (149.50.252.131:38492).
Jan 29 16:08:35.530029 sshd[1708]: Connection closed by 149.50.252.131 port 38492 [preauth]
Jan 29 16:08:35.533338 systemd[1]: sshd@11-167.235.198.80:22-149.50.252.131:38492.service: Deactivated successfully.
Jan 29 16:08:35.540857 sshd[1707]: Connection closed by 149.50.252.131 port 38486 [preauth]
Jan 29 16:08:35.541706 systemd[1]: sshd@10-167.235.198.80:22-149.50.252.131:38486.service: Deactivated successfully.
Jan 29 16:08:36.062687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 16:08:36.072204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:36.203573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:36.208207 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:36.261748 kubelet[1723]: E0129 16:08:36.261687 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:36.265323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:36.265630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:36.266369 systemd[1]: kubelet.service: Consumed 156ms CPU time, 104.3M memory peak.
Jan 29 16:08:37.605480 systemd[1]: Started sshd@12-167.235.198.80:22-149.50.252.131:38494.service - OpenSSH per-connection server daemon (149.50.252.131:38494).
Jan 29 16:08:37.623207 systemd[1]: Started sshd@13-167.235.198.80:22-149.50.252.131:38504.service - OpenSSH per-connection server daemon (149.50.252.131:38504).
Jan 29 16:08:37.766969 sshd[1731]: Connection closed by 149.50.252.131 port 38494 [preauth]
Jan 29 16:08:37.767051 systemd[1]: sshd@12-167.235.198.80:22-149.50.252.131:38494.service: Deactivated successfully.
Jan 29 16:08:37.806968 sshd[1733]: Connection closed by 149.50.252.131 port 38504 [preauth]
Jan 29 16:08:37.809564 systemd[1]: sshd@13-167.235.198.80:22-149.50.252.131:38504.service: Deactivated successfully.
Jan 29 16:08:45.853214 systemd[1]: Started sshd@14-167.235.198.80:22-149.50.252.131:44778.service - OpenSSH per-connection server daemon (149.50.252.131:44778).
Jan 29 16:08:45.872129 systemd[1]: Started sshd@15-167.235.198.80:22-149.50.252.131:44784.service - OpenSSH per-connection server daemon (149.50.252.131:44784).
Jan 29 16:08:46.030981 sshd[1743]: Connection closed by 149.50.252.131 port 44784 [preauth]
Jan 29 16:08:46.033512 systemd[1]: sshd@15-167.235.198.80:22-149.50.252.131:44784.service: Deactivated successfully.
Jan 29 16:08:46.043153 sshd[1741]: Connection closed by 149.50.252.131 port 44778 [preauth]
Jan 29 16:08:46.044870 systemd[1]: sshd@14-167.235.198.80:22-149.50.252.131:44778.service: Deactivated successfully.
Jan 29 16:08:46.329938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 16:08:46.337029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:46.486128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:46.491935 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:46.538882 kubelet[1758]: E0129 16:08:46.538820 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:46.541756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:46.541949 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:46.542763 systemd[1]: kubelet.service: Consumed 159ms CPU time, 103.4M memory peak.
Jan 29 16:08:48.427858 update_engine[1496]: I20250129 16:08:48.427189 1496 update_attempter.cc:509] Updating boot flags...
Jan 29 16:08:48.475151 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1774)
Jan 29 16:08:48.551875 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1775)
Jan 29 16:08:48.717194 systemd[1]: Started sshd@16-167.235.198.80:22-139.178.68.195:44726.service - OpenSSH per-connection server daemon (139.178.68.195:44726).
Jan 29 16:08:49.707077 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 44726 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:49.710204 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:49.723880 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:08:49.729346 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:08:49.733016 systemd-logind[1494]: New session 1 of user core.
Jan 29 16:08:49.743772 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:08:49.752287 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:08:49.756704 (systemd)[1788]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:08:49.760605 systemd-logind[1494]: New session c1 of user core.
Jan 29 16:08:49.893714 systemd[1788]: Queued start job for default target default.target.
Jan 29 16:08:49.900377 systemd[1788]: Created slice app.slice - User Application Slice.
Jan 29 16:08:49.900419 systemd[1788]: Reached target paths.target - Paths.
Jan 29 16:08:49.900469 systemd[1788]: Reached target timers.target - Timers.
Jan 29 16:08:49.902520 systemd[1788]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:08:49.926834 systemd[1788]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:08:49.926925 systemd[1788]: Reached target sockets.target - Sockets.
Jan 29 16:08:49.926993 systemd[1788]: Reached target basic.target - Basic System.
Jan 29 16:08:49.927025 systemd[1788]: Reached target default.target - Main User Target.
Jan 29 16:08:49.927059 systemd[1788]: Startup finished in 158ms.
Jan 29 16:08:49.927239 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:08:49.940498 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:08:50.646236 systemd[1]: Started sshd@17-167.235.198.80:22-139.178.68.195:44734.service - OpenSSH per-connection server daemon (139.178.68.195:44734).
Jan 29 16:08:51.637172 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 44734 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:51.639148 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:51.646501 systemd-logind[1494]: New session 2 of user core.
Jan 29 16:08:51.659146 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:08:52.319442 sshd[1801]: Connection closed by 139.178.68.195 port 44734
Jan 29 16:08:52.320537 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:52.325709 systemd[1]: sshd@17-167.235.198.80:22-139.178.68.195:44734.service: Deactivated successfully.
Jan 29 16:08:52.330083 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:08:52.332168 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:08:52.333479 systemd-logind[1494]: Removed session 2.
Jan 29 16:08:52.495231 systemd[1]: Started sshd@18-167.235.198.80:22-139.178.68.195:44742.service - OpenSSH per-connection server daemon (139.178.68.195:44742).
Jan 29 16:08:53.484643 sshd[1807]: Accepted publickey for core from 139.178.68.195 port 44742 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:53.486825 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:53.493826 systemd-logind[1494]: New session 3 of user core.
Jan 29 16:08:53.499125 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:08:54.155311 sshd[1809]: Connection closed by 139.178.68.195 port 44742
Jan 29 16:08:54.156203 sshd-session[1807]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:54.161564 systemd[1]: sshd@18-167.235.198.80:22-139.178.68.195:44742.service: Deactivated successfully.
Jan 29 16:08:54.164055 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:08:54.164973 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:08:54.166217 systemd-logind[1494]: Removed session 3.
Jan 29 16:08:54.331215 systemd[1]: Started sshd@19-167.235.198.80:22-139.178.68.195:44748.service - OpenSSH per-connection server daemon (139.178.68.195:44748).
Jan 29 16:08:55.319260 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 44748 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:55.322189 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:55.329497 systemd-logind[1494]: New session 4 of user core.
Jan 29 16:08:55.343140 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:08:55.998945 sshd[1817]: Connection closed by 139.178.68.195 port 44748
Jan 29 16:08:56.000032 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:56.004645 systemd[1]: sshd@19-167.235.198.80:22-139.178.68.195:44748.service: Deactivated successfully.
Jan 29 16:08:56.006946 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:08:56.008888 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:08:56.010430 systemd-logind[1494]: Removed session 4.
Jan 29 16:08:56.182337 systemd[1]: Started sshd@20-167.235.198.80:22-139.178.68.195:43972.service - OpenSSH per-connection server daemon (139.178.68.195:43972).
Jan 29 16:08:56.579135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 16:08:56.589146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:56.727970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:56.729040 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:56.769640 kubelet[1833]: E0129 16:08:56.769585 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:56.772272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:56.772456 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:56.772768 systemd[1]: kubelet.service: Consumed 144ms CPU time, 102.2M memory peak.
Jan 29 16:08:57.177437 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 43972 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:57.179872 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:57.186628 systemd-logind[1494]: New session 5 of user core.
Jan 29 16:08:57.200395 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:08:57.716286 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:08:57.716700 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:57.732414 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:57.894488 sshd[1840]: Connection closed by 139.178.68.195 port 43972
Jan 29 16:08:57.895761 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:57.900891 systemd[1]: sshd@20-167.235.198.80:22-139.178.68.195:43972.service: Deactivated successfully.
Jan 29 16:08:57.904310 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:08:57.906478 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:08:57.907770 systemd-logind[1494]: Removed session 5.
Jan 29 16:08:58.071353 systemd[1]: Started sshd@21-167.235.198.80:22-139.178.68.195:43978.service - OpenSSH per-connection server daemon (139.178.68.195:43978).
Jan 29 16:08:59.058416 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 43978 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:08:59.061281 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:59.068742 systemd-logind[1494]: New session 6 of user core.
Jan 29 16:08:59.074148 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:08:59.579992 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:08:59.580296 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:59.584766 sudo[1851]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:59.591307 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:08:59.591627 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:59.610407 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:08:59.640943 augenrules[1873]: No rules
Jan 29 16:08:59.642239 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:08:59.642502 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:08:59.644009 sudo[1850]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:59.804548 sshd[1849]: Connection closed by 139.178.68.195 port 43978
Jan 29 16:08:59.803722 sshd-session[1847]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:59.808537 systemd[1]: sshd@21-167.235.198.80:22-139.178.68.195:43978.service: Deactivated successfully.
Jan 29 16:08:59.810972 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:08:59.812677 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:08:59.814060 systemd-logind[1494]: Removed session 6.
Jan 29 16:08:59.977270 systemd[1]: Started sshd@22-167.235.198.80:22-139.178.68.195:43994.service - OpenSSH per-connection server daemon (139.178.68.195:43994).
Jan 29 16:09:00.951661 sshd[1882]: Accepted publickey for core from 139.178.68.195 port 43994 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:00.953694 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:00.960584 systemd-logind[1494]: New session 7 of user core.
Jan 29 16:09:00.967212 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:09:01.470333 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:09:01.470621 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:09:01.832204 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:09:01.833741 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:09:02.102862 dockerd[1902]: time="2025-01-29T16:09:02.102671859Z" level=info msg="Starting up"
Jan 29 16:09:02.207205 systemd[1]: var-lib-docker-metacopy\x2dcheck738573201-merged.mount: Deactivated successfully.
Jan 29 16:09:02.217544 dockerd[1902]: time="2025-01-29T16:09:02.217471819Z" level=info msg="Loading containers: start."
Jan 29 16:09:02.409274 kernel: Initializing XFRM netlink socket
Jan 29 16:09:02.505396 systemd-networkd[1417]: docker0: Link UP
Jan 29 16:09:02.540822 dockerd[1902]: time="2025-01-29T16:09:02.540619699Z" level=info msg="Loading containers: done."
Jan 29 16:09:02.564640 dockerd[1902]: time="2025-01-29T16:09:02.564562419Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:09:02.564936 dockerd[1902]: time="2025-01-29T16:09:02.564681979Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:09:02.564997 dockerd[1902]: time="2025-01-29T16:09:02.564928739Z" level=info msg="Daemon has completed initialization"
Jan 29 16:09:02.607592 dockerd[1902]: time="2025-01-29T16:09:02.607517219Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:09:02.609371 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:09:04.102306 systemd[1]: Started sshd@23-167.235.198.80:22-149.50.252.131:38382.service - OpenSSH per-connection server daemon (149.50.252.131:38382).
Jan 29 16:09:04.110164 systemd[1]: Started sshd@24-167.235.198.80:22-149.50.252.131:38372.service - OpenSSH per-connection server daemon (149.50.252.131:38372).
Jan 29 16:09:04.273651 sshd[2091]: Connection closed by 149.50.252.131 port 38382 [preauth]
Jan 29 16:09:04.277488 systemd[1]: sshd@23-167.235.198.80:22-149.50.252.131:38382.service: Deactivated successfully.
Jan 29 16:09:04.300197 sshd[2093]: Connection closed by 149.50.252.131 port 38372 [preauth]
Jan 29 16:09:04.301121 systemd[1]: sshd@24-167.235.198.80:22-149.50.252.131:38372.service: Deactivated successfully.
Jan 29 16:09:04.437312 systemd[1]: Started sshd@25-167.235.198.80:22-103.142.199.159:37288.service - OpenSSH per-connection server daemon (103.142.199.159:37288).
Jan 29 16:09:05.289910 sshd[2101]: Invalid user agotoz from 103.142.199.159 port 37288
Jan 29 16:09:05.453681 sshd[2101]: Received disconnect from 103.142.199.159 port 37288:11: Bye Bye [preauth]
Jan 29 16:09:05.453681 sshd[2101]: Disconnected from invalid user agotoz 103.142.199.159 port 37288 [preauth]
Jan 29 16:09:05.455097 systemd[1]: sshd@25-167.235.198.80:22-103.142.199.159:37288.service: Deactivated successfully.
Jan 29 16:09:05.799291 containerd[1518]: time="2025-01-29T16:09:05.799249259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\""
Jan 29 16:09:06.461882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236237151.mount: Deactivated successfully.
Jan 29 16:09:06.830786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 16:09:06.845731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:06.953112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:06.963308 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:09:07.007819 kubelet[2161]: E0129 16:09:07.007323 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:09:07.010399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:09:07.010576 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:09:07.011182 systemd[1]: kubelet.service: Consumed 151ms CPU time, 102.7M memory peak.
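The `Scheduled restart job` entries show systemd re-launching kubelet on a fixed backoff (the unit's `RestartSec` looks to be about 10 seconds). A quick sketch, using the restart timestamps copied from this log, to confirm the spacing:

```python
from datetime import datetime

# Timestamps of the "Scheduled restart job" messages in this log
# (restart counters 2 through 7).
restarts = ["16:08:26.009466", "16:08:36.062687", "16:08:46.329938",
            "16:08:56.579135", "16:09:06.830786", "16:09:17.079751"]

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # each gap is roughly 10 seconds
```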
Jan 29 16:09:08.258828 containerd[1518]: time="2025-01-29T16:09:08.257486209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:08.258828 containerd[1518]: time="2025-01-29T16:09:08.258477246Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26221040"
Jan 29 16:09:08.260231 containerd[1518]: time="2025-01-29T16:09:08.260188220Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:08.266168 containerd[1518]: time="2025-01-29T16:09:08.266113202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:08.267182 containerd[1518]: time="2025-01-29T16:09:08.267139443Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.467846903s"
Jan 29 16:09:08.267182 containerd[1518]: time="2025-01-29T16:09:08.267181046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\""
Jan 29 16:09:08.267876 containerd[1518]: time="2025-01-29T16:09:08.267847818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\""
Jan 29 16:09:10.436859 containerd[1518]: time="2025-01-29T16:09:10.436596301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:10.438819 containerd[1518]: time="2025-01-29T16:09:10.438718727Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527127"
Jan 29 16:09:10.441821 containerd[1518]: time="2025-01-29T16:09:10.440567934Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:10.448510 containerd[1518]: time="2025-01-29T16:09:10.448461236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:10.449680 containerd[1518]: time="2025-01-29T16:09:10.449635076Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 2.181754936s"
Jan 29 16:09:10.449680 containerd[1518]: time="2025-01-29T16:09:10.449676359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\""
Jan 29 16:09:10.450761 containerd[1518]: time="2025-01-29T16:09:10.450733712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\""
Jan 29 16:09:12.332030 containerd[1518]: time="2025-01-29T16:09:12.331897110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:12.333516 containerd[1518]: time="2025-01-29T16:09:12.333451084Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481133"
Jan 29 16:09:12.334925 containerd[1518]: time="2025-01-29T16:09:12.334872610Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:12.339166 containerd[1518]: time="2025-01-29T16:09:12.339099345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:12.340666 containerd[1518]: time="2025-01-29T16:09:12.340510670Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.889624188s"
Jan 29 16:09:12.340666 containerd[1518]: time="2025-01-29T16:09:12.340568314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\""
Jan 29 16:09:12.341743 containerd[1518]: time="2025-01-29T16:09:12.341453567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 16:09:13.605745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305004738.mount: Deactivated successfully.
Jan 29 16:09:13.962532 containerd[1518]: time="2025-01-29T16:09:13.962165765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:13.963807 containerd[1518]: time="2025-01-29T16:09:13.963728934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364423"
Jan 29 16:09:13.964910 containerd[1518]: time="2025-01-29T16:09:13.964835196Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:13.967554 containerd[1518]: time="2025-01-29T16:09:13.967504747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:13.968693 containerd[1518]: time="2025-01-29T16:09:13.968453321Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.626970673s"
Jan 29 16:09:13.968693 containerd[1518]: time="2025-01-29T16:09:13.968483123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\""
Jan 29 16:09:13.969671 containerd[1518]: time="2025-01-29T16:09:13.969112998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 29 16:09:14.914888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606202615.mount: Deactivated successfully.
Jan 29 16:09:16.469805 containerd[1518]: time="2025-01-29T16:09:16.468631236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:16.471291 containerd[1518]: time="2025-01-29T16:09:16.471236597Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jan 29 16:09:16.473058 containerd[1518]: time="2025-01-29T16:09:16.472986039Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:16.477080 containerd[1518]: time="2025-01-29T16:09:16.477034427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:16.479061 containerd[1518]: time="2025-01-29T16:09:16.479014519Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.509871039s"
Jan 29 16:09:16.479061 containerd[1518]: time="2025-01-29T16:09:16.479058122Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 29 16:09:16.479535 containerd[1518]: time="2025-01-29T16:09:16.479465060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 16:09:17.079751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 16:09:17.088167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:17.104416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806549148.mount: Deactivated successfully. Jan 29 16:09:17.211093 containerd[1518]: time="2025-01-29T16:09:17.210918866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:17.223068 containerd[1518]: time="2025-01-29T16:09:17.222992514Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 29 16:09:17.224443 containerd[1518]: time="2025-01-29T16:09:17.224370854Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:17.231866 containerd[1518]: time="2025-01-29T16:09:17.231767177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:17.234945 containerd[1518]: time="2025-01-29T16:09:17.234863713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 753.632015ms" Jan 29 16:09:17.235073 containerd[1518]: time="2025-01-29T16:09:17.234953077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 16:09:17.235884 containerd[1518]: time="2025-01-29T16:09:17.235726230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 16:09:17.254117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:09:17.259073 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:09:17.304757 kubelet[2253]: E0129 16:09:17.304666 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:09:17.310048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:09:17.310225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:09:17.310535 systemd[1]: kubelet.service: Consumed 151ms CPU time, 102.5M memory peak. Jan 29 16:09:17.957024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342680626.mount: Deactivated successfully. Jan 29 16:09:20.589741 containerd[1518]: time="2025-01-29T16:09:20.586153479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:20.592614 containerd[1518]: time="2025-01-29T16:09:20.592550229Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491" Jan 29 16:09:20.594405 containerd[1518]: time="2025-01-29T16:09:20.594359334Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:20.624620 containerd[1518]: time="2025-01-29T16:09:20.622157255Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.386252737s" Jan 29 16:09:20.624620 containerd[1518]: time="2025-01-29T16:09:20.622222897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 16:09:20.624620 containerd[1518]: time="2025-01-29T16:09:20.622656793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:26.070982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:09:26.071711 systemd[1]: kubelet.service: Consumed 151ms CPU time, 102.5M memory peak. Jan 29 16:09:26.078368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:26.118483 systemd[1]: Reload requested from client PID 2341 ('systemctl') (unit session-7.scope)... Jan 29 16:09:26.118656 systemd[1]: Reloading... Jan 29 16:09:26.239916 zram_generator::config[2386]: No configuration found. Jan 29 16:09:26.350682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:09:26.447403 systemd[1]: Reloading finished in 328 ms. Jan 29 16:09:26.498586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:09:26.511451 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:09:26.518712 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:26.520456 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:09:26.520707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:09:26.520763 systemd[1]: kubelet.service: Consumed 103ms CPU time, 91.6M memory peak. Jan 29 16:09:26.527117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:26.642061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:09:26.652230 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:09:26.706137 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:26.706137 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:09:26.706137 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:09:26.706496 kubelet[2440]: I0129 16:09:26.706243 2440 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:09:27.030001 kubelet[2440]: I0129 16:09:27.029341 2440 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:09:27.030001 kubelet[2440]: I0129 16:09:27.029380 2440 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:09:27.030001 kubelet[2440]: I0129 16:09:27.029686 2440 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:09:27.065029 kubelet[2440]: E0129 16:09:27.064963 2440 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://167.235.198.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:27.072140 kubelet[2440]: I0129 16:09:27.072081 2440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:09:27.084043 kubelet[2440]: E0129 16:09:27.083970 2440 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:09:27.084043 kubelet[2440]: I0129 16:09:27.084032 2440 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:09:27.088189 kubelet[2440]: I0129 16:09:27.088114 2440 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:09:27.088455 kubelet[2440]: I0129 16:09:27.088416 2440 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:09:27.088680 kubelet[2440]: I0129 16:09:27.088457 2440 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-d-0116a6be22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:09:27.088835 kubelet[2440]: I0129 16:09:27.088755 2440 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 16:09:27.088835 kubelet[2440]: I0129 16:09:27.088768 2440 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:09:27.089052 kubelet[2440]: I0129 16:09:27.089030 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:09:27.094907 kubelet[2440]: I0129 16:09:27.094840 2440 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:09:27.094907 kubelet[2440]: I0129 16:09:27.094892 2440 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:09:27.095109 kubelet[2440]: I0129 16:09:27.094928 2440 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:09:27.095109 kubelet[2440]: I0129 16:09:27.094950 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:09:27.101558 kubelet[2440]: W0129 16:09:27.101495 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:27.101558 kubelet[2440]: E0129 16:09:27.101564 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://167.235.198.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:27.101967 kubelet[2440]: W0129 16:09:27.101920 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-0116a6be22&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:27.102012 kubelet[2440]: E0129 16:09:27.101977 2440 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-0116a6be22&limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:27.102373 kubelet[2440]: I0129 16:09:27.102348 2440 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:09:27.103163 kubelet[2440]: I0129 16:09:27.103128 2440 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:09:27.103287 kubelet[2440]: W0129 16:09:27.103271 2440 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:09:27.104585 kubelet[2440]: I0129 16:09:27.104550 2440 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:09:27.104663 kubelet[2440]: I0129 16:09:27.104595 2440 server.go:1287] "Started kubelet" Jan 29 16:09:27.105735 kubelet[2440]: I0129 16:09:27.105695 2440 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:09:27.106779 kubelet[2440]: I0129 16:09:27.106753 2440 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:09:27.109229 kubelet[2440]: I0129 16:09:27.109155 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:09:27.109524 kubelet[2440]: I0129 16:09:27.109500 2440 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:09:27.110338 kubelet[2440]: E0129 16:09:27.109813 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://167.235.198.80:6443/api/v1/namespaces/default/events\": dial tcp 167.235.198.80:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5ba961571 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.104574833 +0000 UTC m=+0.447242335,LastTimestamp:2025-01-29 16:09:27.104574833 +0000 UTC m=+0.447242335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:27.113381 kubelet[2440]: I0129 16:09:27.113336 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:09:27.113836 kubelet[2440]: I0129 16:09:27.113804 2440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:09:27.116352 kubelet[2440]: I0129 16:09:27.116270 2440 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:09:27.116977 kubelet[2440]: E0129 16:09:27.116939 2440 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-0116a6be22\" not found" Jan 29 16:09:27.117735 kubelet[2440]: I0129 16:09:27.117713 2440 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:09:27.117903 kubelet[2440]: I0129 16:09:27.117892 2440 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:09:27.119438 kubelet[2440]: W0129 16:09:27.119381 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:27.119571 kubelet[2440]: E0129 16:09:27.119545 
2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:27.119730 kubelet[2440]: E0129 16:09:27.119702 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="200ms" Jan 29 16:09:27.122457 kubelet[2440]: E0129 16:09:27.122425 2440 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:09:27.123952 kubelet[2440]: I0129 16:09:27.123930 2440 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:09:27.124078 kubelet[2440]: I0129 16:09:27.124068 2440 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:09:27.124249 kubelet[2440]: I0129 16:09:27.124229 2440 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:09:27.137941 kubelet[2440]: I0129 16:09:27.137884 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:09:27.139421 kubelet[2440]: I0129 16:09:27.139071 2440 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:09:27.139421 kubelet[2440]: I0129 16:09:27.139100 2440 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:09:27.139421 kubelet[2440]: I0129 16:09:27.139121 2440 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 16:09:27.139421 kubelet[2440]: I0129 16:09:27.139129 2440 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:09:27.139421 kubelet[2440]: E0129 16:09:27.139184 2440 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:09:27.145043 kubelet[2440]: W0129 16:09:27.145000 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:27.145189 kubelet[2440]: E0129 16:09:27.145048 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:27.153618 kubelet[2440]: I0129 16:09:27.153508 2440 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:09:27.153618 kubelet[2440]: I0129 16:09:27.153529 2440 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:09:27.153618 kubelet[2440]: I0129 16:09:27.153550 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:09:27.156399 kubelet[2440]: I0129 16:09:27.156367 2440 policy_none.go:49] "None policy: Start" Jan 29 16:09:27.156399 kubelet[2440]: I0129 16:09:27.156401 2440 memory_manager.go:186] "Starting memorymanager" policy="None" 
Jan 29 16:09:27.156516 kubelet[2440]: I0129 16:09:27.156420 2440 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:09:27.162906 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:09:27.174513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:09:27.184487 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:09:27.196517 kubelet[2440]: I0129 16:09:27.196467 2440 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:09:27.197496 kubelet[2440]: I0129 16:09:27.197183 2440 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:09:27.197496 kubelet[2440]: I0129 16:09:27.197220 2440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:09:27.198224 kubelet[2440]: I0129 16:09:27.197947 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:09:27.201019 kubelet[2440]: E0129 16:09:27.200942 2440 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 16:09:27.201019 kubelet[2440]: E0129 16:09:27.200999 2440 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-d-0116a6be22\" not found" Jan 29 16:09:27.253380 systemd[1]: Created slice kubepods-burstable-pod3421003d42a59284d991bacfae98de7a.slice - libcontainer container kubepods-burstable-pod3421003d42a59284d991bacfae98de7a.slice. 
Jan 29 16:09:27.275384 kubelet[2440]: E0129 16:09:27.275333 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.279930 systemd[1]: Created slice kubepods-burstable-pod07ea957743eac6478d43100ec49190fa.slice - libcontainer container kubepods-burstable-pod07ea957743eac6478d43100ec49190fa.slice. Jan 29 16:09:27.283528 kubelet[2440]: E0129 16:09:27.283478 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.287947 systemd[1]: Created slice kubepods-burstable-podf0f740f29ca5f9e91bdfd69b86c424b1.slice - libcontainer container kubepods-burstable-podf0f740f29ca5f9e91bdfd69b86c424b1.slice. Jan 29 16:09:27.290350 kubelet[2440]: E0129 16:09:27.290267 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.303067 kubelet[2440]: I0129 16:09:27.302985 2440 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.303567 kubelet[2440]: E0129 16:09:27.303532 2440 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319380 kubelet[2440]: I0129 16:09:27.319293 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " 
pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319380 kubelet[2440]: I0129 16:09:27.319358 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319380 kubelet[2440]: I0129 16:09:27.319403 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319931 kubelet[2440]: I0129 16:09:27.319455 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319931 kubelet[2440]: I0129 16:09:27.319488 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319931 kubelet[2440]: I0129 16:09:27.319522 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/f0f740f29ca5f9e91bdfd69b86c424b1-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-d-0116a6be22\" (UID: \"f0f740f29ca5f9e91bdfd69b86c424b1\") " pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319931 kubelet[2440]: I0129 16:09:27.319581 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.319931 kubelet[2440]: I0129 16:09:27.319629 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.320108 kubelet[2440]: I0129 16:09:27.319659 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.320722 kubelet[2440]: E0129 16:09:27.320675 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="400ms" Jan 29 16:09:27.505926 kubelet[2440]: I0129 16:09:27.505593 2440 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22" 
Jan 29 16:09:27.506266 kubelet[2440]: E0129 16:09:27.506132 2440 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.578155 containerd[1518]: time="2025-01-29T16:09:27.578002842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-d-0116a6be22,Uid:3421003d42a59284d991bacfae98de7a,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:27.584827 containerd[1518]: time="2025-01-29T16:09:27.584693356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-d-0116a6be22,Uid:07ea957743eac6478d43100ec49190fa,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:27.592406 containerd[1518]: time="2025-01-29T16:09:27.591835599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-d-0116a6be22,Uid:f0f740f29ca5f9e91bdfd69b86c424b1,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:27.722088 kubelet[2440]: E0129 16:09:27.722022 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="800ms" Jan 29 16:09:27.909956 kubelet[2440]: I0129 16:09:27.909468 2440 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:27.910341 kubelet[2440]: E0129 16:09:27.910272 2440 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:28.025709 kubelet[2440]: W0129 16:09:28.025672 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:28.025866 kubelet[2440]: E0129 16:09:28.025719 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:28.203077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844454551.mount: Deactivated successfully. Jan 29 16:09:28.217210 containerd[1518]: time="2025-01-29T16:09:28.217034427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:09:28.218897 containerd[1518]: time="2025-01-29T16:09:28.218739024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 16:09:28.223630 containerd[1518]: time="2025-01-29T16:09:28.223436724Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:09:28.232879 containerd[1518]: time="2025-01-29T16:09:28.231374775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:09:28.237284 containerd[1518]: time="2025-01-29T16:09:28.236915574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:09:28.239538 
containerd[1518]: time="2025-01-29T16:09:28.239463829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:09:28.240635 containerd[1518]: time="2025-01-29T16:09:28.240561172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:09:28.243819 containerd[1518]: time="2025-01-29T16:09:28.242161527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:09:28.246061 containerd[1518]: time="2025-01-29T16:09:28.245993449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 667.860843ms" Jan 29 16:09:28.248705 containerd[1518]: time="2025-01-29T16:09:28.248635626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 663.783426ms" Jan 29 16:09:28.256857 containerd[1518]: time="2025-01-29T16:09:28.256778201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 664.682396ms" Jan 29 16:09:28.266599 kubelet[2440]: W0129 16:09:28.266465 2440 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-0116a6be22&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:28.266599 kubelet[2440]: E0129 16:09:28.266536 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-0116a6be22&limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:28.382744 containerd[1518]: time="2025-01-29T16:09:28.382606944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:28.382744 containerd[1518]: time="2025-01-29T16:09:28.382688666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:28.382744 containerd[1518]: time="2025-01-29T16:09:28.382699946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.383718 containerd[1518]: time="2025-01-29T16:09:28.382952111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:28.383718 containerd[1518]: time="2025-01-29T16:09:28.383057154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:28.383718 containerd[1518]: time="2025-01-29T16:09:28.383070314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.383718 containerd[1518]: time="2025-01-29T16:09:28.383138475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.383718 containerd[1518]: time="2025-01-29T16:09:28.382775188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.386519 containerd[1518]: time="2025-01-29T16:09:28.385849734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:28.386519 containerd[1518]: time="2025-01-29T16:09:28.386106819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:28.386519 containerd[1518]: time="2025-01-29T16:09:28.386123380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.386519 containerd[1518]: time="2025-01-29T16:09:28.386290543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:28.411049 systemd[1]: Started cri-containerd-228af3fad7484924b82ba2afb112de8a63e259cbb1c4ae76c4f35278cbfec48c.scope - libcontainer container 228af3fad7484924b82ba2afb112de8a63e259cbb1c4ae76c4f35278cbfec48c. Jan 29 16:09:28.426726 systemd[1]: Started cri-containerd-947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86.scope - libcontainer container 947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86. Jan 29 16:09:28.432928 systemd[1]: Started cri-containerd-5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933.scope - libcontainer container 5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933. 
Jan 29 16:09:28.484734 containerd[1518]: time="2025-01-29T16:09:28.483631354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-d-0116a6be22,Uid:3421003d42a59284d991bacfae98de7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"228af3fad7484924b82ba2afb112de8a63e259cbb1c4ae76c4f35278cbfec48c\"" Jan 29 16:09:28.493184 containerd[1518]: time="2025-01-29T16:09:28.493121318Z" level=info msg="CreateContainer within sandbox \"228af3fad7484924b82ba2afb112de8a63e259cbb1c4ae76c4f35278cbfec48c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:09:28.508733 containerd[1518]: time="2025-01-29T16:09:28.508583811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-d-0116a6be22,Uid:07ea957743eac6478d43100ec49190fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86\"" Jan 29 16:09:28.512608 containerd[1518]: time="2025-01-29T16:09:28.512564856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-d-0116a6be22,Uid:f0f740f29ca5f9e91bdfd69b86c424b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933\"" Jan 29 16:09:28.516308 containerd[1518]: time="2025-01-29T16:09:28.516076812Z" level=info msg="CreateContainer within sandbox \"5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:09:28.516431 containerd[1518]: time="2025-01-29T16:09:28.516325137Z" level=info msg="CreateContainer within sandbox \"947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:09:28.524149 containerd[1518]: time="2025-01-29T16:09:28.523949981Z" level=info msg="CreateContainer within sandbox 
\"228af3fad7484924b82ba2afb112de8a63e259cbb1c4ae76c4f35278cbfec48c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd42d4541e61b8a215992a7a4a0ac598ea574c5f6f1143ea9f9f128494c76dac\"" Jan 29 16:09:28.525065 kubelet[2440]: E0129 16:09:28.524604 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="1.6s" Jan 29 16:09:28.525260 containerd[1518]: time="2025-01-29T16:09:28.525222808Z" level=info msg="StartContainer for \"dd42d4541e61b8a215992a7a4a0ac598ea574c5f6f1143ea9f9f128494c76dac\"" Jan 29 16:09:28.535802 containerd[1518]: time="2025-01-29T16:09:28.535744834Z" level=info msg="CreateContainer within sandbox \"5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3\"" Jan 29 16:09:28.538152 containerd[1518]: time="2025-01-29T16:09:28.536773056Z" level=info msg="StartContainer for \"7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3\"" Jan 29 16:09:28.541432 containerd[1518]: time="2025-01-29T16:09:28.541381235Z" level=info msg="CreateContainer within sandbox \"947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477\"" Jan 29 16:09:28.542386 containerd[1518]: time="2025-01-29T16:09:28.542354336Z" level=info msg="StartContainer for \"56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477\"" Jan 29 16:09:28.569039 systemd[1]: Started cri-containerd-dd42d4541e61b8a215992a7a4a0ac598ea574c5f6f1143ea9f9f128494c76dac.scope - libcontainer container 
dd42d4541e61b8a215992a7a4a0ac598ea574c5f6f1143ea9f9f128494c76dac. Jan 29 16:09:28.588045 systemd[1]: Started cri-containerd-7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3.scope - libcontainer container 7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3. Jan 29 16:09:28.590163 kubelet[2440]: W0129 16:09:28.590092 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:28.590163 kubelet[2440]: E0129 16:09:28.590164 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://167.235.198.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:28.604401 systemd[1]: Started cri-containerd-56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477.scope - libcontainer container 56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477. 
Jan 29 16:09:28.631569 containerd[1518]: time="2025-01-29T16:09:28.631452290Z" level=info msg="StartContainer for \"dd42d4541e61b8a215992a7a4a0ac598ea574c5f6f1143ea9f9f128494c76dac\" returns successfully" Jan 29 16:09:28.659921 containerd[1518]: time="2025-01-29T16:09:28.659755258Z" level=info msg="StartContainer for \"7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3\" returns successfully" Jan 29 16:09:28.678405 kubelet[2440]: W0129 16:09:28.678340 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:09:28.678405 kubelet[2440]: E0129 16:09:28.678406 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.198.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:28.684758 containerd[1518]: time="2025-01-29T16:09:28.684617832Z" level=info msg="StartContainer for \"56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477\" returns successfully" Jan 29 16:09:28.716342 kubelet[2440]: I0129 16:09:28.716309 2440 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:28.716730 kubelet[2440]: E0129 16:09:28.716671 2440 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:29.156562 kubelet[2440]: E0129 16:09:29.156525 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" 
Jan 29 16:09:29.160388 kubelet[2440]: E0129 16:09:29.160332 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:29.164695 kubelet[2440]: E0129 16:09:29.164665 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:30.167477 kubelet[2440]: E0129 16:09:30.167438 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:30.167953 kubelet[2440]: E0129 16:09:30.167845 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:30.320232 kubelet[2440]: I0129 16:09:30.320157 2440 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:31.664234 kubelet[2440]: E0129 16:09:31.664199 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.256192 kubelet[2440]: E0129 16:09:32.256149 2440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.271729 kubelet[2440]: E0129 16:09:32.271678 2440 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-d-0116a6be22\" not found" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.299052 kubelet[2440]: E0129 16:09:32.298928 2440 event.go:359] "Server rejected event (will not retry!)" err="namespaces 
\"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5ba961571 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.104574833 +0000 UTC m=+0.447242335,LastTimestamp:2025-01-29 16:09:27.104574833 +0000 UTC m=+0.447242335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:32.354391 kubelet[2440]: E0129 16:09:32.354254 2440 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5bba6218a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.122403722 +0000 UTC m=+0.465071224,LastTimestamp:2025-01-29 16:09:27.122403722 +0000 UTC m=+0.465071224,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:32.360077 kubelet[2440]: I0129 16:09:32.359930 2440 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.360077 kubelet[2440]: E0129 16:09:32.359980 2440 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230-0-0-d-0116a6be22\": node 
\"ci-4230-0-0-d-0116a6be22\" not found" Jan 29 16:09:32.415294 kubelet[2440]: E0129 16:09:32.415109 2440 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5bd74c1c8 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230-0-0-d-0116a6be22 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.152722376 +0000 UTC m=+0.495389878,LastTimestamp:2025-01-29 16:09:27.152722376 +0000 UTC m=+0.495389878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:32.417159 kubelet[2440]: I0129 16:09:32.417109 2440 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.426097 kubelet[2440]: E0129 16:09:32.426049 2440 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.426097 kubelet[2440]: I0129 16:09:32.426085 2440 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.428650 kubelet[2440]: E0129 16:09:32.428539 2440 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.428874 kubelet[2440]: 
I0129 16:09:32.428858 2440 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.430966 kubelet[2440]: E0129 16:09:32.430923 2440 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-0-d-0116a6be22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22" Jan 29 16:09:32.475846 kubelet[2440]: E0129 16:09:32.475116 2440 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5bd74d501 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4230-0-0-d-0116a6be22 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.152727297 +0000 UTC m=+0.495394799,LastTimestamp:2025-01-29 16:09:27.152727297 +0000 UTC m=+0.495394799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:32.530959 kubelet[2440]: E0129 16:09:32.530637 2440 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-d-0116a6be22.181f35a5bd74e091 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-0116a6be22,UID:ci-4230-0-0-d-0116a6be22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4230-0-0-d-0116a6be22 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:09:27.152730257 +0000 UTC m=+0.495397759,LastTimestamp:2025-01-29 16:09:27.152730257 +0000 UTC m=+0.495397759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:09:33.102990 kubelet[2440]: I0129 16:09:33.102736 2440 apiserver.go:52] "Watching apiserver" Jan 29 16:09:33.118051 kubelet[2440]: I0129 16:09:33.117969 2440 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:09:34.618473 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-7.scope)... Jan 29 16:09:34.618507 systemd[1]: Reloading... Jan 29 16:09:34.735831 zram_generator::config[2771]: No configuration found. Jan 29 16:09:34.838657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:09:34.942464 systemd[1]: Reloading finished in 323 ms. Jan 29 16:09:34.973480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:34.987537 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:09:34.987871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:09:34.987929 systemd[1]: kubelet.service: Consumed 915ms CPU time, 124.9M memory peak. Jan 29 16:09:34.996295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:35.158380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:09:35.168202 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:09:35.219537 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:35.219970 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:09:35.220096 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:35.220336 kubelet[2810]: I0129 16:09:35.220299 2810 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:09:35.227640 kubelet[2810]: I0129 16:09:35.227599 2810 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:09:35.227906 kubelet[2810]: I0129 16:09:35.227893 2810 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:09:35.228380 kubelet[2810]: I0129 16:09:35.228341 2810 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:09:35.229896 kubelet[2810]: I0129 16:09:35.229871 2810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 16:09:35.232441 kubelet[2810]: I0129 16:09:35.232388 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:09:35.237120 kubelet[2810]: E0129 16:09:35.237010 2810 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:09:35.237120 kubelet[2810]: I0129 16:09:35.237095 2810 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:09:35.240089 kubelet[2810]: I0129 16:09:35.240057 2810 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:09:35.241155 kubelet[2810]: I0129 16:09:35.240937 2810 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:09:35.242248 kubelet[2810]: I0129 16:09:35.240986 2810 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-0-0-d-0116a6be22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:09:35.242455 kubelet[2810]: I0129 16:09:35.242269 2810 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:09:35.242455 kubelet[2810]: I0129 16:09:35.242297 2810 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:09:35.242455 kubelet[2810]: I0129 16:09:35.242428 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:09:35.242829 kubelet[2810]: I0129 16:09:35.242739 2810 
kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:09:35.242829 kubelet[2810]: I0129 16:09:35.242769 2810 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:09:35.242829 kubelet[2810]: I0129 16:09:35.242822 2810 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:09:35.243117 kubelet[2810]: I0129 16:09:35.242851 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:09:35.249476 kubelet[2810]: I0129 16:09:35.249437 2810 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:09:35.251807 kubelet[2810]: I0129 16:09:35.250020 2810 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:09:35.251807 kubelet[2810]: I0129 16:09:35.251306 2810 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:09:35.251807 kubelet[2810]: I0129 16:09:35.251349 2810 server.go:1287] "Started kubelet" Jan 29 16:09:35.255773 kubelet[2810]: I0129 16:09:35.254286 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:09:35.260712 kubelet[2810]: I0129 16:09:35.260646 2810 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:09:35.269056 kubelet[2810]: I0129 16:09:35.268296 2810 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:09:35.269426 kubelet[2810]: I0129 16:09:35.261100 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:09:35.269426 kubelet[2810]: I0129 16:09:35.260742 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:09:35.269574 kubelet[2810]: I0129 16:09:35.269551 2810 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:09:35.269608 kubelet[2810]: 
I0129 16:09:35.263071 2810 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:09:35.269946 kubelet[2810]: I0129 16:09:35.263083 2810 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:09:35.269993 kubelet[2810]: E0129 16:09:35.263207 2810 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-0116a6be22\" not found" Jan 29 16:09:35.269993 kubelet[2810]: I0129 16:09:35.267321 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:09:35.272311 kubelet[2810]: I0129 16:09:35.271104 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:09:35.272311 kubelet[2810]: I0129 16:09:35.271134 2810 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:09:35.272311 kubelet[2810]: I0129 16:09:35.271157 2810 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 16:09:35.272311 kubelet[2810]: I0129 16:09:35.271163 2810 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 16:09:35.272311 kubelet[2810]: E0129 16:09:35.271203 2810 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:09:35.272311 kubelet[2810]: I0129 16:09:35.271474 2810 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:09:35.286853 kubelet[2810]: I0129 16:09:35.286810 2810 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:09:35.287072 kubelet[2810]: I0129 16:09:35.287059 2810 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:09:35.288804 kubelet[2810]: I0129 16:09:35.287419 2810 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:09:35.361661 kubelet[2810]: I0129 16:09:35.361628 2810 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:09:35.361661 kubelet[2810]: I0129 16:09:35.361650 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:09:35.361842 kubelet[2810]: I0129 16:09:35.361675 2810 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:09:35.361900 kubelet[2810]: I0129 16:09:35.361885 2810 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:09:35.361924 kubelet[2810]: I0129 16:09:35.361902 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:09:35.361924 kubelet[2810]: I0129 16:09:35.361924 2810 policy_none.go:49] "None policy: Start"
Jan 29 16:09:35.361974 kubelet[2810]: I0129 16:09:35.361933 2810 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:09:35.361974 kubelet[2810]: I0129 16:09:35.361945 2810 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:09:35.362088 kubelet[2810]: I0129 16:09:35.362074 2810 state_mem.go:75] "Updated machine memory state"
Jan 29 16:09:35.367391 kubelet[2810]: I0129 16:09:35.367342 2810 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:09:35.367551 kubelet[2810]: I0129 16:09:35.367530 2810 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:09:35.367595 kubelet[2810]: I0129 16:09:35.367547 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:09:35.371248 kubelet[2810]: E0129 16:09:35.371167 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 16:09:35.372303 kubelet[2810]: I0129 16:09:35.372216 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:09:35.374098 kubelet[2810]: I0129 16:09:35.374057 2810 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.374453 kubelet[2810]: I0129 16:09:35.374419 2810 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.374722 kubelet[2810]: I0129 16:09:35.374695 2810 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.472550 kubelet[2810]: I0129 16:09:35.472335 2810 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.474834 kubelet[2810]: I0129 16:09:35.474473 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.474834 kubelet[2810]: I0129 16:09:35.474517 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.474834 kubelet[2810]: I0129 16:09:35.474534 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.474834 kubelet[2810]: I0129 16:09:35.474551 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.474834 kubelet[2810]: I0129 16:09:35.474569 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0f740f29ca5f9e91bdfd69b86c424b1-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-d-0116a6be22\" (UID: \"f0f740f29ca5f9e91bdfd69b86c424b1\") " pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.475066 kubelet[2810]: I0129 16:09:35.474583 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.475066 kubelet[2810]: I0129 16:09:35.474600 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3421003d42a59284d991bacfae98de7a-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-d-0116a6be22\" (UID: \"3421003d42a59284d991bacfae98de7a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.475066 kubelet[2810]: I0129 16:09:35.474616 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.475066 kubelet[2810]: I0129 16:09:35.474631 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07ea957743eac6478d43100ec49190fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-d-0116a6be22\" (UID: \"07ea957743eac6478d43100ec49190fa\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.489096 kubelet[2810]: I0129 16:09:35.489053 2810 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.489240 kubelet[2810]: I0129 16:09:35.489147 2810 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:35.605787 sudo[2843]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 16:09:35.606192 sudo[2843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 16:09:36.102630 sudo[2843]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:36.245143 kubelet[2810]: I0129 16:09:36.243673 2810 apiserver.go:52] "Watching apiserver"
Jan 29 16:09:36.271105 kubelet[2810]: I0129 16:09:36.270983 2810 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:09:36.297084 kubelet[2810]: I0129 16:09:36.296619 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22" podStartSLOduration=1.2965986059999999 podStartE2EDuration="1.296598606s" podCreationTimestamp="2025-01-29 16:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:36.296421444 +0000 UTC m=+1.123992158" watchObservedRunningTime="2025-01-29 16:09:36.296598606 +0000 UTC m=+1.124169320"
Jan 29 16:09:36.333845 kubelet[2810]: I0129 16:09:36.331897 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-d-0116a6be22" podStartSLOduration=1.331876259 podStartE2EDuration="1.331876259s" podCreationTimestamp="2025-01-29 16:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:36.313903068 +0000 UTC m=+1.141473902" watchObservedRunningTime="2025-01-29 16:09:36.331876259 +0000 UTC m=+1.159446973"
Jan 29 16:09:36.351714 kubelet[2810]: I0129 16:09:36.350622 2810 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:36.353420 systemd[1]: Started sshd@26-167.235.198.80:22-149.50.252.131:54570.service - OpenSSH per-connection server daemon (149.50.252.131:54570).
Jan 29 16:09:36.376434 systemd[1]: Started sshd@27-167.235.198.80:22-149.50.252.131:54582.service - OpenSSH per-connection server daemon (149.50.252.131:54582).
Jan 29 16:09:36.385399 kubelet[2810]: I0129 16:09:36.385038 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-d-0116a6be22" podStartSLOduration=1.3849831 podStartE2EDuration="1.3849831s" podCreationTimestamp="2025-01-29 16:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:36.334454172 +0000 UTC m=+1.162024886" watchObservedRunningTime="2025-01-29 16:09:36.3849831 +0000 UTC m=+1.212553814"
Jan 29 16:09:36.387877 kubelet[2810]: E0129 16:09:36.387116 2810 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-0-d-0116a6be22\" already exists" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22"
Jan 29 16:09:36.591731 sshd[2857]: Connection closed by 149.50.252.131 port 54582 [preauth]
Jan 29 16:09:36.595346 systemd[1]: sshd@27-167.235.198.80:22-149.50.252.131:54582.service: Deactivated successfully.
Jan 29 16:09:36.596485 sshd[2854]: Connection closed by 149.50.252.131 port 54570 [preauth]
Jan 29 16:09:36.602731 systemd[1]: sshd@26-167.235.198.80:22-149.50.252.131:54570.service: Deactivated successfully.
Jan 29 16:09:38.542700 sudo[1885]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:38.701183 sshd[1884]: Connection closed by 139.178.68.195 port 43994
Jan 29 16:09:38.701027 sshd-session[1882]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:38.707274 systemd[1]: sshd@22-167.235.198.80:22-139.178.68.195:43994.service: Deactivated successfully.
Jan 29 16:09:38.710410 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:09:38.710650 systemd[1]: session-7.scope: Consumed 8.161s CPU time, 266M memory peak.
Jan 29 16:09:38.712628 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:09:38.714558 systemd-logind[1494]: Removed session 7.
Jan 29 16:09:40.032371 kubelet[2810]: I0129 16:09:40.032282 2810 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:09:40.035329 containerd[1518]: time="2025-01-29T16:09:40.034323136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:09:40.036408 kubelet[2810]: I0129 16:09:40.035954 2810 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:09:40.677724 systemd[1]: Created slice kubepods-besteffort-pod6c51a88b_ed80_44c6_ac83_ad8114c95991.slice - libcontainer container kubepods-besteffort-pod6c51a88b_ed80_44c6_ac83_ad8114c95991.slice.
Jan 29 16:09:40.712825 systemd[1]: Created slice kubepods-burstable-pod56526d92_8267_4c39_b176_3a2d0823d621.slice - libcontainer container kubepods-burstable-pod56526d92_8267_4c39_b176_3a2d0823d621.slice.
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716138 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprg4\" (UniqueName: \"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-kube-api-access-nprg4\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716183 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c51a88b-ed80-44c6-ac83-ad8114c95991-xtables-lock\") pod \"kube-proxy-v8sqq\" (UID: \"6c51a88b-ed80-44c6-ac83-ad8114c95991\") " pod="kube-system/kube-proxy-v8sqq"
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716201 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c51a88b-ed80-44c6-ac83-ad8114c95991-kube-proxy\") pod \"kube-proxy-v8sqq\" (UID: \"6c51a88b-ed80-44c6-ac83-ad8114c95991\") " pod="kube-system/kube-proxy-v8sqq"
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716216 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-bpf-maps\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716230 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-lib-modules\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716359 kubelet[2810]: I0129 16:09:40.716245 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-hostproc\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716585 kubelet[2810]: I0129 16:09:40.716259 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-hubble-tls\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716585 kubelet[2810]: I0129 16:09:40.716275 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wpn\" (UniqueName: \"kubernetes.io/projected/6c51a88b-ed80-44c6-ac83-ad8114c95991-kube-api-access-l2wpn\") pod \"kube-proxy-v8sqq\" (UID: \"6c51a88b-ed80-44c6-ac83-ad8114c95991\") " pod="kube-system/kube-proxy-v8sqq"
Jan 29 16:09:40.716585 kubelet[2810]: I0129 16:09:40.716293 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cni-path\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716585 kubelet[2810]: I0129 16:09:40.716308 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-kernel\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716585 kubelet[2810]: I0129 16:09:40.716363 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-etc-cni-netd\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716378 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-net\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716394 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-cgroup\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716417 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51a88b-ed80-44c6-ac83-ad8114c95991-lib-modules\") pod \"kube-proxy-v8sqq\" (UID: \"6c51a88b-ed80-44c6-ac83-ad8114c95991\") " pod="kube-system/kube-proxy-v8sqq"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716448 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-run\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716462 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-xtables-lock\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.716688 kubelet[2810]: I0129 16:09:40.716476 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56526d92-8267-4c39-b176-3a2d0823d621-clustermesh-secrets\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.717949 kubelet[2810]: I0129 16:09:40.716489 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56526d92-8267-4c39-b176-3a2d0823d621-cilium-config-path\") pod \"cilium-bxwsb\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " pod="kube-system/cilium-bxwsb"
Jan 29 16:09:40.999218 containerd[1518]: time="2025-01-29T16:09:40.998397884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8sqq,Uid:6c51a88b-ed80-44c6-ac83-ad8114c95991,Namespace:kube-system,Attempt:0,}"
Jan 29 16:09:41.019828 containerd[1518]: time="2025-01-29T16:09:41.018761035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxwsb,Uid:56526d92-8267-4c39-b176-3a2d0823d621,Namespace:kube-system,Attempt:0,}"
Jan 29 16:09:41.029877 containerd[1518]: time="2025-01-29T16:09:41.029673456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:09:41.030439 containerd[1518]: time="2025-01-29T16:09:41.029909858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:09:41.030439 containerd[1518]: time="2025-01-29T16:09:41.030218981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.031948 containerd[1518]: time="2025-01-29T16:09:41.030666305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.051438 containerd[1518]: time="2025-01-29T16:09:41.051103455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:09:41.051438 containerd[1518]: time="2025-01-29T16:09:41.051171375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:09:41.051438 containerd[1518]: time="2025-01-29T16:09:41.051187096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.051438 containerd[1518]: time="2025-01-29T16:09:41.051368537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.056018 systemd[1]: Started cri-containerd-dce7c8c017ce92f0e7943c6b1c3f65bb3df6a8725ae31f763eb2ad683f178bdd.scope - libcontainer container dce7c8c017ce92f0e7943c6b1c3f65bb3df6a8725ae31f763eb2ad683f178bdd.
Jan 29 16:09:41.075260 systemd[1]: Started cri-containerd-a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1.scope - libcontainer container a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1.
Jan 29 16:09:41.125672 containerd[1518]: time="2025-01-29T16:09:41.125297264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8sqq,Uid:6c51a88b-ed80-44c6-ac83-ad8114c95991,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce7c8c017ce92f0e7943c6b1c3f65bb3df6a8725ae31f763eb2ad683f178bdd\""
Jan 29 16:09:41.132026 containerd[1518]: time="2025-01-29T16:09:41.130684674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxwsb,Uid:56526d92-8267-4c39-b176-3a2d0823d621,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\""
Jan 29 16:09:41.137198 containerd[1518]: time="2025-01-29T16:09:41.137074013Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:09:41.141368 containerd[1518]: time="2025-01-29T16:09:41.141310292Z" level=info msg="CreateContainer within sandbox \"dce7c8c017ce92f0e7943c6b1c3f65bb3df6a8725ae31f763eb2ad683f178bdd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:09:41.149784 systemd[1]: Created slice kubepods-besteffort-pode8c82f51_dc50_45ff_aa5a_b6636f16ce22.slice - libcontainer container kubepods-besteffort-pode8c82f51_dc50_45ff_aa5a_b6636f16ce22.slice.
Jan 29 16:09:41.173839 containerd[1518]: time="2025-01-29T16:09:41.173767914Z" level=info msg="CreateContainer within sandbox \"dce7c8c017ce92f0e7943c6b1c3f65bb3df6a8725ae31f763eb2ad683f178bdd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68e369d3273ff2150758289cff531cbe1861ea542dea98ae7a46c7d0440e8f68\""
Jan 29 16:09:41.175425 containerd[1518]: time="2025-01-29T16:09:41.175362488Z" level=info msg="StartContainer for \"68e369d3273ff2150758289cff531cbe1861ea542dea98ae7a46c7d0440e8f68\""
Jan 29 16:09:41.219455 kubelet[2810]: I0129 16:09:41.219416 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtgb\" (UniqueName: \"kubernetes.io/projected/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-kube-api-access-txtgb\") pod \"cilium-operator-6c4d7847fc-bvn4f\" (UID: \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\") " pod="kube-system/cilium-operator-6c4d7847fc-bvn4f"
Jan 29 16:09:41.231762 kubelet[2810]: I0129 16:09:41.219493 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bvn4f\" (UID: \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\") " pod="kube-system/cilium-operator-6c4d7847fc-bvn4f"
Jan 29 16:09:41.231129 systemd[1]: Started cri-containerd-68e369d3273ff2150758289cff531cbe1861ea542dea98ae7a46c7d0440e8f68.scope - libcontainer container 68e369d3273ff2150758289cff531cbe1861ea542dea98ae7a46c7d0440e8f68.
Jan 29 16:09:41.267212 containerd[1518]: time="2025-01-29T16:09:41.266949419Z" level=info msg="StartContainer for \"68e369d3273ff2150758289cff531cbe1861ea542dea98ae7a46c7d0440e8f68\" returns successfully"
Jan 29 16:09:41.456573 containerd[1518]: time="2025-01-29T16:09:41.456530379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bvn4f,Uid:e8c82f51-dc50-45ff-aa5a-b6636f16ce22,Namespace:kube-system,Attempt:0,}"
Jan 29 16:09:41.478781 containerd[1518]: time="2025-01-29T16:09:41.478515703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:09:41.478781 containerd[1518]: time="2025-01-29T16:09:41.478598624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:09:41.478781 containerd[1518]: time="2025-01-29T16:09:41.478617144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.478781 containerd[1518]: time="2025-01-29T16:09:41.478731625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:41.498031 systemd[1]: Started cri-containerd-e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27.scope - libcontainer container e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27.
Jan 29 16:09:41.539107 containerd[1518]: time="2025-01-29T16:09:41.538965424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bvn4f,Uid:e8c82f51-dc50-45ff-aa5a-b6636f16ce22,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\""
Jan 29 16:09:45.681083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606379736.mount: Deactivated successfully.
Jan 29 16:09:45.787812 kubelet[2810]: I0129 16:09:45.787714 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v8sqq" podStartSLOduration=5.787692071 podStartE2EDuration="5.787692071s" podCreationTimestamp="2025-01-29 16:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:41.40175487 +0000 UTC m=+6.229325584" watchObservedRunningTime="2025-01-29 16:09:45.787692071 +0000 UTC m=+10.615262785"
Jan 29 16:09:47.221640 containerd[1518]: time="2025-01-29T16:09:47.221558515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:47.223391 containerd[1518]: time="2025-01-29T16:09:47.223101245Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 29 16:09:47.224831 containerd[1518]: time="2025-01-29T16:09:47.224506934Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:47.226746 containerd[1518]: time="2025-01-29T16:09:47.226576947Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.089159491s"
Jan 29 16:09:47.226746 containerd[1518]: time="2025-01-29T16:09:47.226632427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 29 16:09:47.229281 containerd[1518]: time="2025-01-29T16:09:47.229159963Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:09:47.230924 containerd[1518]: time="2025-01-29T16:09:47.230693893Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:09:47.256063 containerd[1518]: time="2025-01-29T16:09:47.255969132Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\""
Jan 29 16:09:47.258302 containerd[1518]: time="2025-01-29T16:09:47.257051499Z" level=info msg="StartContainer for \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\""
Jan 29 16:09:47.310493 systemd[1]: Started cri-containerd-ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409.scope - libcontainer container ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409.
Jan 29 16:09:47.340665 containerd[1518]: time="2025-01-29T16:09:47.340600945Z" level=info msg="StartContainer for \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\" returns successfully"
Jan 29 16:09:47.361107 systemd[1]: cri-containerd-ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409.scope: Deactivated successfully.
Jan 29 16:09:47.654437 containerd[1518]: time="2025-01-29T16:09:47.654374123Z" level=info msg="shim disconnected" id=ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409 namespace=k8s.io
Jan 29 16:09:47.654437 containerd[1518]: time="2025-01-29T16:09:47.654433484Z" level=warning msg="cleaning up after shim disconnected" id=ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409 namespace=k8s.io
Jan 29 16:09:47.654437 containerd[1518]: time="2025-01-29T16:09:47.654441724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:09:48.243521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409-rootfs.mount: Deactivated successfully.
Jan 29 16:09:48.397856 containerd[1518]: time="2025-01-29T16:09:48.397622732Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:09:48.422229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042104579.mount: Deactivated successfully.
Jan 29 16:09:48.436250 containerd[1518]: time="2025-01-29T16:09:48.436203640Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\""
Jan 29 16:09:48.437529 containerd[1518]: time="2025-01-29T16:09:48.437478047Z" level=info msg="StartContainer for \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\""
Jan 29 16:09:48.472003 systemd[1]: Started cri-containerd-0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f.scope - libcontainer container 0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f.
Jan 29 16:09:48.508127 containerd[1518]: time="2025-01-29T16:09:48.507085659Z" level=info msg="StartContainer for \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\" returns successfully"
Jan 29 16:09:48.522224 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:09:48.522472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:09:48.522836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:09:48.529801 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:09:48.530062 systemd[1]: cri-containerd-0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f.scope: Deactivated successfully.
Jan 29 16:09:48.557412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:09:48.569084 containerd[1518]: time="2025-01-29T16:09:48.568951344Z" level=info msg="shim disconnected" id=0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f namespace=k8s.io
Jan 29 16:09:48.569084 containerd[1518]: time="2025-01-29T16:09:48.569050265Z" level=warning msg="cleaning up after shim disconnected" id=0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f namespace=k8s.io
Jan 29 16:09:48.569084 containerd[1518]: time="2025-01-29T16:09:48.569066305Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:09:48.582875 containerd[1518]: time="2025-01-29T16:09:48.582823546Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:09:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:09:49.243419 systemd[1]: run-containerd-runc-k8s.io-0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f-runc.TCtXoE.mount: Deactivated successfully.
Jan 29 16:09:49.243718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f-rootfs.mount: Deactivated successfully. Jan 29 16:09:49.330569 containerd[1518]: time="2025-01-29T16:09:49.330495563Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:49.331914 containerd[1518]: time="2025-01-29T16:09:49.331847370Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 16:09:49.332939 containerd[1518]: time="2025-01-29T16:09:49.332007771Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:49.334446 containerd[1518]: time="2025-01-29T16:09:49.334125783Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.10491534s" Jan 29 16:09:49.334446 containerd[1518]: time="2025-01-29T16:09:49.334180863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 16:09:49.338194 containerd[1518]: time="2025-01-29T16:09:49.337342961Z" level=info msg="CreateContainer within sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" for 
container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:09:49.358840 containerd[1518]: time="2025-01-29T16:09:49.358657879Z" level=info msg="CreateContainer within sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\"" Jan 29 16:09:49.359690 containerd[1518]: time="2025-01-29T16:09:49.359554444Z" level=info msg="StartContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\"" Jan 29 16:09:49.392164 systemd[1]: Started cri-containerd-7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7.scope - libcontainer container 7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7. Jan 29 16:09:49.409177 containerd[1518]: time="2025-01-29T16:09:49.409122359Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:09:49.429286 containerd[1518]: time="2025-01-29T16:09:49.428729627Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\"" Jan 29 16:09:49.431508 containerd[1518]: time="2025-01-29T16:09:49.431381642Z" level=info msg="StartContainer for \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\"" Jan 29 16:09:49.463082 containerd[1518]: time="2025-01-29T16:09:49.462963657Z" level=info msg="StartContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" returns successfully" Jan 29 16:09:49.473092 systemd[1]: Started cri-containerd-fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2.scope - libcontainer container 
fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2. Jan 29 16:09:49.509773 containerd[1518]: time="2025-01-29T16:09:49.509613755Z" level=info msg="StartContainer for \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\" returns successfully" Jan 29 16:09:49.514432 systemd[1]: cri-containerd-fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2.scope: Deactivated successfully. Jan 29 16:09:49.611874 containerd[1518]: time="2025-01-29T16:09:49.611693361Z" level=info msg="shim disconnected" id=fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2 namespace=k8s.io Jan 29 16:09:49.613068 containerd[1518]: time="2025-01-29T16:09:49.612184844Z" level=warning msg="cleaning up after shim disconnected" id=fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2 namespace=k8s.io Jan 29 16:09:49.613068 containerd[1518]: time="2025-01-29T16:09:49.612850447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:50.422150 containerd[1518]: time="2025-01-29T16:09:50.422106585Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:09:50.442361 containerd[1518]: time="2025-01-29T16:09:50.442288690Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\"" Jan 29 16:09:50.443631 containerd[1518]: time="2025-01-29T16:09:50.443314775Z" level=info msg="StartContainer for \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\"" Jan 29 16:09:50.489033 systemd[1]: Started cri-containerd-c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702.scope - libcontainer container c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702. 
Jan 29 16:09:50.524515 systemd[1]: cri-containerd-c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702.scope: Deactivated successfully. Jan 29 16:09:50.530181 containerd[1518]: time="2025-01-29T16:09:50.529954665Z" level=info msg="StartContainer for \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\" returns successfully" Jan 29 16:09:50.553589 containerd[1518]: time="2025-01-29T16:09:50.553501027Z" level=info msg="shim disconnected" id=c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702 namespace=k8s.io Jan 29 16:09:50.553589 containerd[1518]: time="2025-01-29T16:09:50.553580348Z" level=warning msg="cleaning up after shim disconnected" id=c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702 namespace=k8s.io Jan 29 16:09:50.553589 containerd[1518]: time="2025-01-29T16:09:50.553588788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:51.244785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702-rootfs.mount: Deactivated successfully. 
Jan 29 16:09:51.429977 containerd[1518]: time="2025-01-29T16:09:51.429869120Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:09:51.456868 kubelet[2810]: I0129 16:09:51.456767 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bvn4f" podStartSLOduration=2.663295872 podStartE2EDuration="10.456743891s" podCreationTimestamp="2025-01-29 16:09:41 +0000 UTC" firstStartedPulling="2025-01-29 16:09:41.54172057 +0000 UTC m=+6.369291284" lastFinishedPulling="2025-01-29 16:09:49.335168589 +0000 UTC m=+14.162739303" observedRunningTime="2025-01-29 16:09:50.456114841 +0000 UTC m=+15.283685555" watchObservedRunningTime="2025-01-29 16:09:51.456743891 +0000 UTC m=+16.284314725" Jan 29 16:09:51.458034 containerd[1518]: time="2025-01-29T16:09:51.457653895Z" level=info msg="CreateContainer within sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\"" Jan 29 16:09:51.458942 containerd[1518]: time="2025-01-29T16:09:51.458786941Z" level=info msg="StartContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\"" Jan 29 16:09:51.491047 systemd[1]: Started cri-containerd-2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf.scope - libcontainer container 2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf. 
Jan 29 16:09:51.523888 containerd[1518]: time="2025-01-29T16:09:51.523091214Z" level=info msg="StartContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" returns successfully" Jan 29 16:09:51.676375 kubelet[2810]: I0129 16:09:51.675476 2810 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:09:51.718526 kubelet[2810]: I0129 16:09:51.718486 2810 status_manager.go:890] "Failed to get status for pod" podUID="428b75fe-485b-43db-b1b3-a89e8b794386" pod="kube-system/coredns-668d6bf9bc-982fw" err="pods \"coredns-668d6bf9bc-982fw\" is forbidden: User \"system:node:ci-4230-0-0-d-0116a6be22\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-0-d-0116a6be22' and this object" Jan 29 16:09:51.720305 systemd[1]: Created slice kubepods-burstable-pod428b75fe_485b_43db_b1b3_a89e8b794386.slice - libcontainer container kubepods-burstable-pod428b75fe_485b_43db_b1b3_a89e8b794386.slice. Jan 29 16:09:51.732627 systemd[1]: Created slice kubepods-burstable-pod312d45dc_0657_4c28_9f36_5279333367c9.slice - libcontainer container kubepods-burstable-pod312d45dc_0657_4c28_9f36_5279333367c9.slice. 
Jan 29 16:09:51.815087 kubelet[2810]: I0129 16:09:51.814937 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpsn\" (UniqueName: \"kubernetes.io/projected/428b75fe-485b-43db-b1b3-a89e8b794386-kube-api-access-5xpsn\") pod \"coredns-668d6bf9bc-982fw\" (UID: \"428b75fe-485b-43db-b1b3-a89e8b794386\") " pod="kube-system/coredns-668d6bf9bc-982fw" Jan 29 16:09:51.815087 kubelet[2810]: I0129 16:09:51.814981 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312d45dc-0657-4c28-9f36-5279333367c9-config-volume\") pod \"coredns-668d6bf9bc-kxnps\" (UID: \"312d45dc-0657-4c28-9f36-5279333367c9\") " pod="kube-system/coredns-668d6bf9bc-kxnps" Jan 29 16:09:51.815087 kubelet[2810]: I0129 16:09:51.815032 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfxvx\" (UniqueName: \"kubernetes.io/projected/312d45dc-0657-4c28-9f36-5279333367c9-kube-api-access-cfxvx\") pod \"coredns-668d6bf9bc-kxnps\" (UID: \"312d45dc-0657-4c28-9f36-5279333367c9\") " pod="kube-system/coredns-668d6bf9bc-kxnps" Jan 29 16:09:51.815087 kubelet[2810]: I0129 16:09:51.815058 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/428b75fe-485b-43db-b1b3-a89e8b794386-config-volume\") pod \"coredns-668d6bf9bc-982fw\" (UID: \"428b75fe-485b-43db-b1b3-a89e8b794386\") " pod="kube-system/coredns-668d6bf9bc-982fw" Jan 29 16:09:52.026486 containerd[1518]: time="2025-01-29T16:09:52.026148336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982fw,Uid:428b75fe-485b-43db-b1b3-a89e8b794386,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:52.041876 containerd[1518]: time="2025-01-29T16:09:52.040421121Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kxnps,Uid:312d45dc-0657-4c28-9f36-5279333367c9,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:52.454485 kubelet[2810]: I0129 16:09:52.454416 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxwsb" podStartSLOduration=6.362254018 podStartE2EDuration="12.454394211s" podCreationTimestamp="2025-01-29 16:09:40 +0000 UTC" firstStartedPulling="2025-01-29 16:09:41.135992123 +0000 UTC m=+5.963562797" lastFinishedPulling="2025-01-29 16:09:47.228132276 +0000 UTC m=+12.055702990" observedRunningTime="2025-01-29 16:09:52.452344401 +0000 UTC m=+17.279915155" watchObservedRunningTime="2025-01-29 16:09:52.454394211 +0000 UTC m=+17.281964925" Jan 29 16:09:53.818281 systemd-networkd[1417]: cilium_host: Link UP Jan 29 16:09:53.820294 systemd-networkd[1417]: cilium_net: Link UP Jan 29 16:09:53.820708 systemd-networkd[1417]: cilium_net: Gained carrier Jan 29 16:09:53.821013 systemd-networkd[1417]: cilium_host: Gained carrier Jan 29 16:09:53.881077 systemd-networkd[1417]: cilium_host: Gained IPv6LL Jan 29 16:09:53.940872 systemd-networkd[1417]: cilium_vxlan: Link UP Jan 29 16:09:53.941270 systemd-networkd[1417]: cilium_vxlan: Gained carrier Jan 29 16:09:54.048126 systemd-networkd[1417]: cilium_net: Gained IPv6LL Jan 29 16:09:54.244924 kernel: NET: Registered PF_ALG protocol family Jan 29 16:09:55.001576 systemd-networkd[1417]: lxc_health: Link UP Jan 29 16:09:55.019409 systemd-networkd[1417]: lxc_health: Gained carrier Jan 29 16:09:55.104025 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL Jan 29 16:09:55.615457 systemd-networkd[1417]: lxc004d94f46058: Link UP Jan 29 16:09:55.616955 kernel: eth0: renamed from tmp8d0f7 Jan 29 16:09:55.622170 kernel: eth0: renamed from tmp93a5b Jan 29 16:09:55.631869 systemd-networkd[1417]: lxcfbadeb97cb4f: Link UP Jan 29 16:09:55.633378 systemd-networkd[1417]: lxcfbadeb97cb4f: Gained carrier Jan 29 16:09:55.633496 systemd-networkd[1417]: lxc004d94f46058: 
Gained carrier Jan 29 16:09:56.512093 systemd-networkd[1417]: lxc_health: Gained IPv6LL Jan 29 16:09:57.088241 systemd-networkd[1417]: lxcfbadeb97cb4f: Gained IPv6LL Jan 29 16:09:57.216229 systemd-networkd[1417]: lxc004d94f46058: Gained IPv6LL Jan 29 16:09:59.852837 containerd[1518]: time="2025-01-29T16:09:59.851852004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:59.852837 containerd[1518]: time="2025-01-29T16:09:59.852399046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:59.852837 containerd[1518]: time="2025-01-29T16:09:59.852419206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:59.852837 containerd[1518]: time="2025-01-29T16:09:59.852521206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:59.881821 containerd[1518]: time="2025-01-29T16:09:59.880881888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:59.884656 containerd[1518]: time="2025-01-29T16:09:59.883451696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:59.884656 containerd[1518]: time="2025-01-29T16:09:59.883479256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:59.884656 containerd[1518]: time="2025-01-29T16:09:59.883593616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:59.921060 systemd[1]: Started cri-containerd-8d0f7588e14ca392c5ac480037609b716df9487a2d6fab6a1594517e3b051bb6.scope - libcontainer container 8d0f7588e14ca392c5ac480037609b716df9487a2d6fab6a1594517e3b051bb6. Jan 29 16:09:59.923585 systemd[1]: Started cri-containerd-93a5b64d81678f39b0e38214ae4f7aae8ae9ae9c0af7fef3144ee8bb6ace47b4.scope - libcontainer container 93a5b64d81678f39b0e38214ae4f7aae8ae9ae9c0af7fef3144ee8bb6ace47b4. Jan 29 16:09:59.971404 containerd[1518]: time="2025-01-29T16:09:59.970913070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kxnps,Uid:312d45dc-0657-4c28-9f36-5279333367c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"93a5b64d81678f39b0e38214ae4f7aae8ae9ae9c0af7fef3144ee8bb6ace47b4\"" Jan 29 16:09:59.979069 containerd[1518]: time="2025-01-29T16:09:59.979022854Z" level=info msg="CreateContainer within sandbox \"93a5b64d81678f39b0e38214ae4f7aae8ae9ae9c0af7fef3144ee8bb6ace47b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:09:59.988133 containerd[1518]: time="2025-01-29T16:09:59.988021960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982fw,Uid:428b75fe-485b-43db-b1b3-a89e8b794386,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d0f7588e14ca392c5ac480037609b716df9487a2d6fab6a1594517e3b051bb6\"" Jan 29 16:09:59.991977 containerd[1518]: time="2025-01-29T16:09:59.991868331Z" level=info msg="CreateContainer within sandbox \"8d0f7588e14ca392c5ac480037609b716df9487a2d6fab6a1594517e3b051bb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:10:00.013271 containerd[1518]: time="2025-01-29T16:10:00.013085630Z" level=info msg="CreateContainer within sandbox \"93a5b64d81678f39b0e38214ae4f7aae8ae9ae9c0af7fef3144ee8bb6ace47b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e60c5a45e52394eeba7dc4ef6a6e07940c9a9f0b2a18b2561027396f7379d828\"" Jan 
29 16:10:00.014715 containerd[1518]: time="2025-01-29T16:10:00.013848192Z" level=info msg="StartContainer for \"e60c5a45e52394eeba7dc4ef6a6e07940c9a9f0b2a18b2561027396f7379d828\"" Jan 29 16:10:00.015781 containerd[1518]: time="2025-01-29T16:10:00.015663357Z" level=info msg="CreateContainer within sandbox \"8d0f7588e14ca392c5ac480037609b716df9487a2d6fab6a1594517e3b051bb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9338bd0143c84aa5288fafc3888aded8e207ebea95f632e25ef09195d0933bcf\"" Jan 29 16:10:00.016716 containerd[1518]: time="2025-01-29T16:10:00.016674560Z" level=info msg="StartContainer for \"9338bd0143c84aa5288fafc3888aded8e207ebea95f632e25ef09195d0933bcf\"" Jan 29 16:10:00.052217 systemd[1]: Started cri-containerd-e60c5a45e52394eeba7dc4ef6a6e07940c9a9f0b2a18b2561027396f7379d828.scope - libcontainer container e60c5a45e52394eeba7dc4ef6a6e07940c9a9f0b2a18b2561027396f7379d828. Jan 29 16:10:00.064480 systemd[1]: Started cri-containerd-9338bd0143c84aa5288fafc3888aded8e207ebea95f632e25ef09195d0933bcf.scope - libcontainer container 9338bd0143c84aa5288fafc3888aded8e207ebea95f632e25ef09195d0933bcf. 
Jan 29 16:10:00.105099 containerd[1518]: time="2025-01-29T16:10:00.104030198Z" level=info msg="StartContainer for \"e60c5a45e52394eeba7dc4ef6a6e07940c9a9f0b2a18b2561027396f7379d828\" returns successfully" Jan 29 16:10:00.115398 containerd[1518]: time="2025-01-29T16:10:00.115250669Z" level=info msg="StartContainer for \"9338bd0143c84aa5288fafc3888aded8e207ebea95f632e25ef09195d0933bcf\" returns successfully" Jan 29 16:10:00.471782 kubelet[2810]: I0129 16:10:00.471506 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kxnps" podStartSLOduration=19.471485119 podStartE2EDuration="19.471485119s" podCreationTimestamp="2025-01-29 16:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:00.468499471 +0000 UTC m=+25.296070185" watchObservedRunningTime="2025-01-29 16:10:00.471485119 +0000 UTC m=+25.299055833" Jan 29 16:10:00.521735 kubelet[2810]: I0129 16:10:00.520395 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-982fw" podStartSLOduration=19.520374972 podStartE2EDuration="19.520374972s" podCreationTimestamp="2025-01-29 16:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:00.514865677 +0000 UTC m=+25.342436391" watchObservedRunningTime="2025-01-29 16:10:00.520374972 +0000 UTC m=+25.347945686" Jan 29 16:10:06.284327 systemd[1]: Started sshd@28-167.235.198.80:22-134.122.8.241:55548.service - OpenSSH per-connection server daemon (134.122.8.241:55548). 
Jan 29 16:10:06.936849 sshd[4192]: Received disconnect from 134.122.8.241 port 55548:11: Bye Bye [preauth] Jan 29 16:10:06.936849 sshd[4192]: Disconnected from authenticating user root 134.122.8.241 port 55548 [preauth] Jan 29 16:10:06.938675 systemd[1]: sshd@28-167.235.198.80:22-134.122.8.241:55548.service: Deactivated successfully. Jan 29 16:10:26.685351 systemd[1]: Started sshd@29-167.235.198.80:22-149.50.252.131:38656.service - OpenSSH per-connection server daemon (149.50.252.131:38656). Jan 29 16:10:26.690456 systemd[1]: Started sshd@30-167.235.198.80:22-149.50.252.131:38670.service - OpenSSH per-connection server daemon (149.50.252.131:38670). Jan 29 16:10:26.855665 sshd[4203]: Connection closed by 149.50.252.131 port 38656 [preauth] Jan 29 16:10:26.857319 systemd[1]: sshd@29-167.235.198.80:22-149.50.252.131:38656.service: Deactivated successfully. Jan 29 16:10:26.864412 sshd[4204]: Connection closed by 149.50.252.131 port 38670 [preauth] Jan 29 16:10:26.865321 systemd[1]: sshd@30-167.235.198.80:22-149.50.252.131:38670.service: Deactivated successfully. Jan 29 16:11:38.937237 systemd[1]: Started sshd@31-167.235.198.80:22-149.50.252.131:51604.service - OpenSSH per-connection server daemon (149.50.252.131:51604). Jan 29 16:11:38.953138 systemd[1]: Started sshd@32-167.235.198.80:22-149.50.252.131:51616.service - OpenSSH per-connection server daemon (149.50.252.131:51616). Jan 29 16:11:39.102945 sshd[4223]: Connection closed by 149.50.252.131 port 51604 [preauth] Jan 29 16:11:39.107029 systemd[1]: sshd@31-167.235.198.80:22-149.50.252.131:51604.service: Deactivated successfully. Jan 29 16:11:39.137711 sshd[4225]: Connection closed by 149.50.252.131 port 51616 [preauth] Jan 29 16:11:39.139553 systemd[1]: sshd@32-167.235.198.80:22-149.50.252.131:51616.service: Deactivated successfully. Jan 29 16:13:09.835475 systemd[1]: Started sshd@33-167.235.198.80:22-134.122.8.241:42192.service - OpenSSH per-connection server daemon (134.122.8.241:42192). 
Jan 29 16:13:10.355612 sshd[4244]: Invalid user www-user from 134.122.8.241 port 42192 Jan 29 16:13:10.451039 sshd[4244]: Received disconnect from 134.122.8.241 port 42192:11: Bye Bye [preauth] Jan 29 16:13:10.451039 sshd[4244]: Disconnected from invalid user www-user 134.122.8.241 port 42192 [preauth] Jan 29 16:13:10.457478 systemd[1]: sshd@33-167.235.198.80:22-134.122.8.241:42192.service: Deactivated successfully. Jan 29 16:13:17.176772 systemd[1]: Started sshd@34-167.235.198.80:22-149.50.252.131:60712.service - OpenSSH per-connection server daemon (149.50.252.131:60712). Jan 29 16:13:17.212978 systemd[1]: Started sshd@35-167.235.198.80:22-149.50.252.131:60718.service - OpenSSH per-connection server daemon (149.50.252.131:60718). Jan 29 16:13:17.353119 sshd[4253]: Connection closed by 149.50.252.131 port 60712 [preauth] Jan 29 16:13:17.355878 systemd[1]: sshd@34-167.235.198.80:22-149.50.252.131:60712.service: Deactivated successfully. Jan 29 16:13:17.405853 sshd[4256]: Connection closed by 149.50.252.131 port 60718 [preauth] Jan 29 16:13:17.406559 systemd[1]: sshd@35-167.235.198.80:22-149.50.252.131:60718.service: Deactivated successfully. Jan 29 16:13:24.115290 systemd[1]: Started sshd@36-167.235.198.80:22-103.142.199.159:33796.service - OpenSSH per-connection server daemon (103.142.199.159:33796). Jan 29 16:13:24.997486 sshd[4264]: Invalid user elasticsearch from 103.142.199.159 port 33796 Jan 29 16:13:25.155958 sshd[4264]: Received disconnect from 103.142.199.159 port 33796:11: Bye Bye [preauth] Jan 29 16:13:25.155958 sshd[4264]: Disconnected from invalid user elasticsearch 103.142.199.159 port 33796 [preauth] Jan 29 16:13:25.160575 systemd[1]: sshd@36-167.235.198.80:22-103.142.199.159:33796.service: Deactivated successfully. 
Jan 29 16:13:27.512362 update_engine[1496]: I20250129 16:13:27.512267 1496 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 16:13:27.512362 update_engine[1496]: I20250129 16:13:27.512346 1496 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 16:13:27.512882 update_engine[1496]: I20250129 16:13:27.512738 1496 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513399 1496 omaha_request_params.cc:62] Current group set to alpha Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513552 1496 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513569 1496 update_attempter.cc:643] Scheduling an action processor start. Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513608 1496 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513706 1496 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.513784 1496 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.514403 1496 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 29 16:13:27.514806 update_engine[1496]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 29 16:13:27.514806 update_engine[1496]: <os version="Chateau" platform="CoreOS" sp="4230.0.0_aarch64"></os> Jan 29 16:13:27.514806 update_engine[1496]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.0" track="alpha" bootid="{6d6e0e5a-cd05-44d3-8a6b-08e6e54589c3}" oem="hetzner" oemversion="0" alephversion="4230.0.0" 
machineid="a994ee601bac428baa7c7f30b4b3d756" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Jan 29 16:13:27.514806 update_engine[1496]: <ping active="1"></ping> Jan 29 16:13:27.514806 update_engine[1496]: <updatecheck></updatecheck> Jan 29 16:13:27.514806 update_engine[1496]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Jan 29 16:13:27.514806 update_engine[1496]: </app> Jan 29 16:13:27.514806 update_engine[1496]: </request> Jan 29 16:13:27.514806 update_engine[1496]: I20250129 16:13:27.514437 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:13:27.515238 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 16:13:27.517079 update_engine[1496]: I20250129 16:13:27.517004 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:13:27.517640 update_engine[1496]: I20250129 16:13:27.517566 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:13:27.518296 update_engine[1496]: E20250129 16:13:27.518239 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:13:27.518398 update_engine[1496]: I20250129 16:13:27.518317 1496 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 16:13:37.423591 update_engine[1496]: I20250129 16:13:37.422708 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:13:37.423591 update_engine[1496]: I20250129 16:13:37.423054 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:13:37.423591 update_engine[1496]: I20250129 16:13:37.423365 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:13:37.424556 update_engine[1496]: E20250129 16:13:37.424482 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:13:37.424556 update_engine[1496]: I20250129 16:13:37.424556 1496 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 16:13:47.426851 update_engine[1496]: I20250129 16:13:47.426692 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:13:47.428992 update_engine[1496]: I20250129 16:13:47.426980 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:13:47.428992 update_engine[1496]: I20250129 16:13:47.427238 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:13:47.428992 update_engine[1496]: E20250129 16:13:47.428668 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:13:47.428992 update_engine[1496]: I20250129 16:13:47.428731 1496 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 16:13:57.424049 update_engine[1496]: I20250129 16:13:57.423923 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:13:57.425231 update_engine[1496]: I20250129 16:13:57.424333 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:13:57.425231 update_engine[1496]: I20250129 16:13:57.424944 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:13:57.425430 update_engine[1496]: E20250129 16:13:57.425318 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:13:57.425430 update_engine[1496]: I20250129 16:13:57.425424 1496 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:13:57.425596 update_engine[1496]: I20250129 16:13:57.425444 1496 omaha_request_action.cc:617] Omaha request response: Jan 29 16:13:57.425639 update_engine[1496]: E20250129 16:13:57.425588 1496 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 16:13:57.425639 update_engine[1496]: I20250129 16:13:57.425621 1496 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 16:13:57.425714 update_engine[1496]: I20250129 16:13:57.425633 1496 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:13:57.425714 update_engine[1496]: I20250129 16:13:57.425644 1496 update_attempter.cc:306] Processing Done. Jan 29 16:13:57.425714 update_engine[1496]: E20250129 16:13:57.425668 1496 update_attempter.cc:619] Update failed. Jan 29 16:13:57.425714 update_engine[1496]: I20250129 16:13:57.425680 1496 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 16:13:57.425714 update_engine[1496]: I20250129 16:13:57.425691 1496 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 16:13:57.425714 update_engine[1496]: I20250129 16:13:57.425704 1496 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 29 16:13:57.425970 update_engine[1496]: I20250129 16:13:57.425925 1496 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:13:57.425970 update_engine[1496]: I20250129 16:13:57.425964 1496 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:13:57.426034 update_engine[1496]: I20250129 16:13:57.425974 1496 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 29 16:13:57.426034 update_engine[1496]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 29 16:13:57.426034 update_engine[1496]: <os version="Chateau" platform="CoreOS" sp="4230.0.0_aarch64"></os> Jan 29 16:13:57.426034 update_engine[1496]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.0" track="alpha" bootid="{6d6e0e5a-cd05-44d3-8a6b-08e6e54589c3}" oem="hetzner" oemversion="0" alephversion="4230.0.0" machineid="a994ee601bac428baa7c7f30b4b3d756" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Jan 29 16:13:57.426034 update_engine[1496]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Jan 29 16:13:57.426034 update_engine[1496]: </app> Jan 29 16:13:57.426034 update_engine[1496]: </request> Jan 29 16:13:57.426034 update_engine[1496]: I20250129 16:13:57.425981 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:13:57.426216 update_engine[1496]: I20250129 16:13:57.426147 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:13:57.426471 update_engine[1496]: I20250129 16:13:57.426393 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:13:57.426558 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 16:13:57.427052 update_engine[1496]: E20250129 16:13:57.426978 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:13:57.427052 update_engine[1496]: I20250129 16:13:57.427042 1496 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:13:57.427052 update_engine[1496]: I20250129 16:13:57.427052 1496 omaha_request_action.cc:617] Omaha request response: Jan 29 16:13:57.427189 update_engine[1496]: I20250129 16:13:57.427059 1496 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:13:57.427189 update_engine[1496]: I20250129 16:13:57.427068 1496 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:13:57.427189 update_engine[1496]: I20250129 16:13:57.427075 1496 update_attempter.cc:306] Processing Done. Jan 29 16:13:57.427189 update_engine[1496]: I20250129 16:13:57.427083 1496 update_attempter.cc:310] Error event sent. Jan 29 16:13:57.427189 update_engine[1496]: I20250129 16:13:57.427098 1496 update_check_scheduler.cc:74] Next update check in 42m22s Jan 29 16:13:57.427478 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 16:14:22.810387 systemd[1]: Started sshd@37-167.235.198.80:22-134.122.8.241:40546.service - OpenSSH per-connection server daemon (134.122.8.241:40546). 
Jan 29 16:14:23.325571 sshd[4278]: Invalid user user1 from 134.122.8.241 port 40546 Jan 29 16:14:23.421983 sshd[4278]: Received disconnect from 134.122.8.241 port 40546:11: Bye Bye [preauth] Jan 29 16:14:23.421983 sshd[4278]: Disconnected from invalid user user1 134.122.8.241 port 40546 [preauth] Jan 29 16:14:23.423950 systemd[1]: sshd@37-167.235.198.80:22-134.122.8.241:40546.service: Deactivated successfully. Jan 29 16:14:24.577339 systemd[1]: Started sshd@38-167.235.198.80:22-139.178.68.195:40470.service - OpenSSH per-connection server daemon (139.178.68.195:40470). Jan 29 16:14:25.581767 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 40470 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:25.585085 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:25.592918 systemd-logind[1494]: New session 8 of user core. Jan 29 16:14:25.599077 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:14:26.366623 sshd[4285]: Connection closed by 139.178.68.195 port 40470 Jan 29 16:14:26.367424 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:26.371675 systemd[1]: sshd@38-167.235.198.80:22-139.178.68.195:40470.service: Deactivated successfully. Jan 29 16:14:26.374168 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:14:26.375784 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:14:26.376775 systemd-logind[1494]: Removed session 8. Jan 29 16:14:31.545532 systemd[1]: Started sshd@39-167.235.198.80:22-139.178.68.195:37286.service - OpenSSH per-connection server daemon (139.178.68.195:37286). 
Jan 29 16:14:32.523233 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 37286 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:32.526279 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:32.533872 systemd-logind[1494]: New session 9 of user core. Jan 29 16:14:32.539101 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:14:33.273528 sshd[4300]: Connection closed by 139.178.68.195 port 37286 Jan 29 16:14:33.274217 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:33.281433 systemd[1]: sshd@39-167.235.198.80:22-139.178.68.195:37286.service: Deactivated successfully. Jan 29 16:14:33.283677 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:14:33.285143 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:14:33.286419 systemd-logind[1494]: Removed session 9. Jan 29 16:14:38.451105 systemd[1]: Started sshd@40-167.235.198.80:22-139.178.68.195:35474.service - OpenSSH per-connection server daemon (139.178.68.195:35474). Jan 29 16:14:39.440843 sshd[4314]: Accepted publickey for core from 139.178.68.195 port 35474 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:39.444164 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:39.451062 systemd-logind[1494]: New session 10 of user core. Jan 29 16:14:39.457203 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:14:40.195732 sshd[4316]: Connection closed by 139.178.68.195 port 35474 Jan 29 16:14:40.195589 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:40.202605 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:14:40.203408 systemd[1]: sshd@40-167.235.198.80:22-139.178.68.195:35474.service: Deactivated successfully. 
Jan 29 16:14:40.206980 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:14:40.208784 systemd-logind[1494]: Removed session 10. Jan 29 16:14:45.376428 systemd[1]: Started sshd@41-167.235.198.80:22-139.178.68.195:60516.service - OpenSSH per-connection server daemon (139.178.68.195:60516). Jan 29 16:14:46.361273 sshd[4330]: Accepted publickey for core from 139.178.68.195 port 60516 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:46.363770 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:46.369857 systemd-logind[1494]: New session 11 of user core. Jan 29 16:14:46.377220 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:14:47.122122 sshd[4332]: Connection closed by 139.178.68.195 port 60516 Jan 29 16:14:47.122775 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:47.127725 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:14:47.128007 systemd[1]: sshd@41-167.235.198.80:22-139.178.68.195:60516.service: Deactivated successfully. Jan 29 16:14:47.130349 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:14:47.131963 systemd-logind[1494]: Removed session 11. Jan 29 16:14:47.306271 systemd[1]: Started sshd@42-167.235.198.80:22-139.178.68.195:60532.service - OpenSSH per-connection server daemon (139.178.68.195:60532). Jan 29 16:14:48.307359 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 60532 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:48.309814 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:48.317040 systemd-logind[1494]: New session 12 of user core. Jan 29 16:14:48.322685 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 16:14:49.133082 sshd[4347]: Connection closed by 139.178.68.195 port 60532 Jan 29 16:14:49.132950 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:49.139287 systemd[1]: sshd@42-167.235.198.80:22-139.178.68.195:60532.service: Deactivated successfully. Jan 29 16:14:49.141906 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:14:49.143514 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:14:49.144670 systemd-logind[1494]: Removed session 12. Jan 29 16:14:49.312421 systemd[1]: Started sshd@43-167.235.198.80:22-139.178.68.195:60534.service - OpenSSH per-connection server daemon (139.178.68.195:60534). Jan 29 16:14:50.300511 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 60534 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:50.302552 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:50.310865 systemd-logind[1494]: New session 13 of user core. Jan 29 16:14:50.315242 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:14:51.051896 sshd[4358]: Connection closed by 139.178.68.195 port 60534 Jan 29 16:14:51.052489 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:51.057250 systemd[1]: sshd@43-167.235.198.80:22-139.178.68.195:60534.service: Deactivated successfully. Jan 29 16:14:51.063377 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:14:51.064362 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:14:51.065245 systemd-logind[1494]: Removed session 13. Jan 29 16:14:56.228347 systemd[1]: Started sshd@44-167.235.198.80:22-139.178.68.195:49736.service - OpenSSH per-connection server daemon (139.178.68.195:49736). 
Jan 29 16:14:57.228005 sshd[4370]: Accepted publickey for core from 139.178.68.195 port 49736 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:57.231267 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:57.236873 systemd-logind[1494]: New session 14 of user core. Jan 29 16:14:57.244159 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:14:57.998622 sshd[4372]: Connection closed by 139.178.68.195 port 49736 Jan 29 16:14:57.998443 sshd-session[4370]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:58.005099 systemd[1]: sshd@44-167.235.198.80:22-139.178.68.195:49736.service: Deactivated successfully. Jan 29 16:14:58.008383 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:14:58.013526 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:14:58.015631 systemd-logind[1494]: Removed session 14. Jan 29 16:14:58.178348 systemd[1]: Started sshd@45-167.235.198.80:22-139.178.68.195:49742.service - OpenSSH per-connection server daemon (139.178.68.195:49742). Jan 29 16:14:59.177481 sshd[4383]: Accepted publickey for core from 139.178.68.195 port 49742 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:14:59.180076 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:59.187298 systemd-logind[1494]: New session 15 of user core. Jan 29 16:14:59.195163 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:14:59.985956 sshd[4385]: Connection closed by 139.178.68.195 port 49742 Jan 29 16:14:59.985835 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:59.991263 systemd[1]: sshd@45-167.235.198.80:22-139.178.68.195:49742.service: Deactivated successfully. Jan 29 16:14:59.995180 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 29 16:14:59.997888 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:14:59.999644 systemd-logind[1494]: Removed session 15. Jan 29 16:15:00.165288 systemd[1]: Started sshd@46-167.235.198.80:22-139.178.68.195:49752.service - OpenSSH per-connection server daemon (139.178.68.195:49752). Jan 29 16:15:00.959754 systemd[1]: Started sshd@47-167.235.198.80:22-103.142.199.159:60746.service - OpenSSH per-connection server daemon (103.142.199.159:60746). Jan 29 16:15:01.141553 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 49752 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:01.142652 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:01.148896 systemd-logind[1494]: New session 16 of user core. Jan 29 16:15:01.155077 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:15:01.849326 sshd[4398]: Invalid user admin from 103.142.199.159 port 60746 Jan 29 16:15:02.011595 sshd[4398]: Received disconnect from 103.142.199.159 port 60746:11: Bye Bye [preauth] Jan 29 16:15:02.011595 sshd[4398]: Disconnected from invalid user admin 103.142.199.159 port 60746 [preauth] Jan 29 16:15:02.013292 systemd[1]: sshd@47-167.235.198.80:22-103.142.199.159:60746.service: Deactivated successfully. Jan 29 16:15:02.829311 sshd[4400]: Connection closed by 139.178.68.195 port 49752 Jan 29 16:15:02.830047 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:02.834221 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:15:02.835828 systemd[1]: sshd@46-167.235.198.80:22-139.178.68.195:49752.service: Deactivated successfully. Jan 29 16:15:02.838955 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:15:02.840301 systemd-logind[1494]: Removed session 16. 
Jan 29 16:15:03.011359 systemd[1]: Started sshd@48-167.235.198.80:22-139.178.68.195:49754.service - OpenSSH per-connection server daemon (139.178.68.195:49754). Jan 29 16:15:03.997922 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 49754 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:03.998846 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:04.006364 systemd-logind[1494]: New session 17 of user core. Jan 29 16:15:04.013121 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:15:04.893827 sshd[4422]: Connection closed by 139.178.68.195 port 49754 Jan 29 16:15:04.894684 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:04.900787 systemd[1]: sshd@48-167.235.198.80:22-139.178.68.195:49754.service: Deactivated successfully. Jan 29 16:15:04.903270 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:15:04.904198 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:15:04.905141 systemd-logind[1494]: Removed session 17. Jan 29 16:15:05.063075 systemd[1]: Started sshd@49-167.235.198.80:22-139.178.68.195:58042.service - OpenSSH per-connection server daemon (139.178.68.195:58042). Jan 29 16:15:06.061191 sshd[4432]: Accepted publickey for core from 139.178.68.195 port 58042 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:06.063429 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:06.071425 systemd-logind[1494]: New session 18 of user core. Jan 29 16:15:06.078022 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 16:15:06.819523 sshd[4434]: Connection closed by 139.178.68.195 port 58042 Jan 29 16:15:06.820312 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:06.825521 systemd[1]: sshd@49-167.235.198.80:22-139.178.68.195:58042.service: Deactivated successfully. Jan 29 16:15:06.828671 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:15:06.831422 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:15:06.832733 systemd-logind[1494]: Removed session 18. Jan 29 16:15:12.003697 systemd[1]: Started sshd@50-167.235.198.80:22-139.178.68.195:58048.service - OpenSSH per-connection server daemon (139.178.68.195:58048). Jan 29 16:15:12.991813 sshd[4450]: Accepted publickey for core from 139.178.68.195 port 58048 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:12.994893 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:13.000639 systemd-logind[1494]: New session 19 of user core. Jan 29 16:15:13.009186 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:15:13.751733 sshd[4452]: Connection closed by 139.178.68.195 port 58048 Jan 29 16:15:13.752656 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:13.758573 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:15:13.760295 systemd[1]: sshd@50-167.235.198.80:22-139.178.68.195:58048.service: Deactivated successfully. Jan 29 16:15:13.762830 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:15:13.765254 systemd-logind[1494]: Removed session 19. Jan 29 16:15:18.931500 systemd[1]: Started sshd@51-167.235.198.80:22-139.178.68.195:54554.service - OpenSSH per-connection server daemon (139.178.68.195:54554). 
Jan 29 16:15:19.911838 sshd[4464]: Accepted publickey for core from 139.178.68.195 port 54554 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:19.914321 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:19.920011 systemd-logind[1494]: New session 20 of user core. Jan 29 16:15:19.928142 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:15:20.661664 sshd[4466]: Connection closed by 139.178.68.195 port 54554 Jan 29 16:15:20.663373 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:20.667194 systemd[1]: sshd@51-167.235.198.80:22-139.178.68.195:54554.service: Deactivated successfully. Jan 29 16:15:20.669819 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:15:20.673766 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:15:20.674815 systemd-logind[1494]: Removed session 20. Jan 29 16:15:20.842612 systemd[1]: Started sshd@52-167.235.198.80:22-139.178.68.195:54568.service - OpenSSH per-connection server daemon (139.178.68.195:54568). Jan 29 16:15:21.831129 sshd[4477]: Accepted publickey for core from 139.178.68.195 port 54568 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:21.833817 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:21.839505 systemd-logind[1494]: New session 21 of user core. Jan 29 16:15:21.844012 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 16:15:24.555801 containerd[1518]: time="2025-01-29T16:15:24.554371433Z" level=info msg="StopContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" with timeout 30 (s)" Jan 29 16:15:24.558236 containerd[1518]: time="2025-01-29T16:15:24.558044643Z" level=info msg="Stop container \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" with signal terminated" Jan 29 16:15:24.564086 systemd[1]: run-containerd-runc-k8s.io-2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf-runc.qQ6FRm.mount: Deactivated successfully. Jan 29 16:15:24.578190 containerd[1518]: time="2025-01-29T16:15:24.578125017Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:15:24.583145 systemd[1]: cri-containerd-7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7.scope: Deactivated successfully. Jan 29 16:15:24.592242 containerd[1518]: time="2025-01-29T16:15:24.592174295Z" level=info msg="StopContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" with timeout 2 (s)" Jan 29 16:15:24.592736 containerd[1518]: time="2025-01-29T16:15:24.592676536Z" level=info msg="Stop container \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" with signal terminated" Jan 29 16:15:24.602217 systemd-networkd[1417]: lxc_health: Link DOWN Jan 29 16:15:24.602667 systemd-networkd[1417]: lxc_health: Lost carrier Jan 29 16:15:24.627956 systemd[1]: cri-containerd-2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf.scope: Deactivated successfully. Jan 29 16:15:24.628371 systemd[1]: cri-containerd-2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf.scope: Consumed 8.185s CPU time, 123.1M memory peak, 144K read from disk, 12.9M written to disk. 
Jan 29 16:15:24.639580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7-rootfs.mount: Deactivated successfully. Jan 29 16:15:24.654552 containerd[1518]: time="2025-01-29T16:15:24.654425182Z" level=info msg="shim disconnected" id=7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7 namespace=k8s.io Jan 29 16:15:24.654552 containerd[1518]: time="2025-01-29T16:15:24.654537542Z" level=warning msg="cleaning up after shim disconnected" id=7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7 namespace=k8s.io Jan 29 16:15:24.654552 containerd[1518]: time="2025-01-29T16:15:24.654559782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:24.672418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf-rootfs.mount: Deactivated successfully. Jan 29 16:15:24.675995 containerd[1518]: time="2025-01-29T16:15:24.675764759Z" level=info msg="shim disconnected" id=2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf namespace=k8s.io Jan 29 16:15:24.675995 containerd[1518]: time="2025-01-29T16:15:24.675883999Z" level=warning msg="cleaning up after shim disconnected" id=2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf namespace=k8s.io Jan 29 16:15:24.675995 containerd[1518]: time="2025-01-29T16:15:24.675897159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:24.689921 containerd[1518]: time="2025-01-29T16:15:24.689872917Z" level=info msg="StopContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" returns successfully" Jan 29 16:15:24.691260 containerd[1518]: time="2025-01-29T16:15:24.691081280Z" level=info msg="StopPodSandbox for \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\"" Jan 29 16:15:24.691260 containerd[1518]: time="2025-01-29T16:15:24.691144600Z" level=info msg="Container to stop 
\"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.695573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27-shm.mount: Deactivated successfully. Jan 29 16:15:24.700776 containerd[1518]: time="2025-01-29T16:15:24.700719666Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:15:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:15:24.705637 systemd[1]: cri-containerd-e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27.scope: Deactivated successfully. Jan 29 16:15:24.707775 containerd[1518]: time="2025-01-29T16:15:24.707731245Z" level=info msg="StopContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" returns successfully" Jan 29 16:15:24.709539 containerd[1518]: time="2025-01-29T16:15:24.709500890Z" level=info msg="StopPodSandbox for \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\"" Jan 29 16:15:24.709658 containerd[1518]: time="2025-01-29T16:15:24.709557770Z" level=info msg="Container to stop \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.709658 containerd[1518]: time="2025-01-29T16:15:24.709571330Z" level=info msg="Container to stop \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.709658 containerd[1518]: time="2025-01-29T16:15:24.709581210Z" level=info msg="Container to stop \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.709658 containerd[1518]: 
time="2025-01-29T16:15:24.709591930Z" level=info msg="Container to stop \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.709658 containerd[1518]: time="2025-01-29T16:15:24.709601570Z" level=info msg="Container to stop \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:24.720721 systemd[1]: cri-containerd-a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1.scope: Deactivated successfully. Jan 29 16:15:24.748558 containerd[1518]: time="2025-01-29T16:15:24.748488594Z" level=info msg="shim disconnected" id=e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27 namespace=k8s.io Jan 29 16:15:24.748558 containerd[1518]: time="2025-01-29T16:15:24.748559154Z" level=warning msg="cleaning up after shim disconnected" id=e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27 namespace=k8s.io Jan 29 16:15:24.748930 containerd[1518]: time="2025-01-29T16:15:24.748573194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:24.757705 containerd[1518]: time="2025-01-29T16:15:24.757593059Z" level=info msg="shim disconnected" id=a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1 namespace=k8s.io Jan 29 16:15:24.757705 containerd[1518]: time="2025-01-29T16:15:24.757672659Z" level=warning msg="cleaning up after shim disconnected" id=a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1 namespace=k8s.io Jan 29 16:15:24.757705 containerd[1518]: time="2025-01-29T16:15:24.757686819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:24.769869 containerd[1518]: time="2025-01-29T16:15:24.769778331Z" level=info msg="TearDown network for sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" successfully" Jan 29 16:15:24.769869 containerd[1518]: 
time="2025-01-29T16:15:24.769862012Z" level=info msg="StopPodSandbox for \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" returns successfully" Jan 29 16:15:24.783450 containerd[1518]: time="2025-01-29T16:15:24.783402288Z" level=info msg="TearDown network for sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" successfully" Jan 29 16:15:24.783450 containerd[1518]: time="2025-01-29T16:15:24.783446088Z" level=info msg="StopPodSandbox for \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" returns successfully" Jan 29 16:15:24.901350 kubelet[2810]: I0129 16:15:24.901018 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-hubble-tls\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.901350 kubelet[2810]: I0129 16:15:24.901105 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-lib-modules\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.901350 kubelet[2810]: I0129 16:15:24.901148 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-etc-cni-netd\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.901350 kubelet[2810]: I0129 16:15:24.901192 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56526d92-8267-4c39-b176-3a2d0823d621-clustermesh-secrets\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.901350 
kubelet[2810]: I0129 16:15:24.901235 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56526d92-8267-4c39-b176-3a2d0823d621-cilium-config-path\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.901350 kubelet[2810]: I0129 16:15:24.901273 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-kernel\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901325 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-xtables-lock\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901367 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-cgroup\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901406 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtgb\" (UniqueName: \"kubernetes.io/projected/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-kube-api-access-txtgb\") pod \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\" (UID: \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901443 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-cilium-config-path\") pod \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\" (UID: \"e8c82f51-dc50-45ff-aa5a-b6636f16ce22\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901486 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nprg4\" (UniqueName: \"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-kube-api-access-nprg4\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902291 kubelet[2810]: I0129 16:15:24.901519 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-hostproc\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902604 kubelet[2810]: I0129 16:15:24.901554 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-net\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902604 kubelet[2810]: I0129 16:15:24.901611 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cni-path\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902604 kubelet[2810]: I0129 16:15:24.901657 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-run\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902604 kubelet[2810]: I0129 16:15:24.901696 2810 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-bpf-maps\") pod \"56526d92-8267-4c39-b176-3a2d0823d621\" (UID: \"56526d92-8267-4c39-b176-3a2d0823d621\") " Jan 29 16:15:24.902604 kubelet[2810]: I0129 16:15:24.901840 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.905208 kubelet[2810]: I0129 16:15:24.903092 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.905208 kubelet[2810]: I0129 16:15:24.903186 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.905208 kubelet[2810]: I0129 16:15:24.903234 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.907871 kubelet[2810]: I0129 16:15:24.907228 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.907871 kubelet[2810]: I0129 16:15:24.907288 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.910680 kubelet[2810]: I0129 16:15:24.910541 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-hostproc" (OuterVolumeSpecName: "hostproc") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.911366 kubelet[2810]: I0129 16:15:24.910967 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.911366 kubelet[2810]: I0129 16:15:24.911264 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cni-path" (OuterVolumeSpecName: "cni-path") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.911366 kubelet[2810]: I0129 16:15:24.911279 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:15:24.911725 kubelet[2810]: I0129 16:15:24.911698 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8c82f51-dc50-45ff-aa5a-b6636f16ce22" (UID: "e8c82f51-dc50-45ff-aa5a-b6636f16ce22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:15:24.912480 kubelet[2810]: I0129 16:15:24.912437 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56526d92-8267-4c39-b176-3a2d0823d621-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 16:15:24.912480 kubelet[2810]: I0129 16:15:24.912441 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-kube-api-access-nprg4" (OuterVolumeSpecName: "kube-api-access-nprg4") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "kube-api-access-nprg4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:15:24.914584 kubelet[2810]: I0129 16:15:24.914515 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:15:24.915817 kubelet[2810]: I0129 16:15:24.915752 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56526d92-8267-4c39-b176-3a2d0823d621-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56526d92-8267-4c39-b176-3a2d0823d621" (UID: "56526d92-8267-4c39-b176-3a2d0823d621"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:15:24.916384 kubelet[2810]: I0129 16:15:24.916333 2810 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-kube-api-access-txtgb" (OuterVolumeSpecName: "kube-api-access-txtgb") pod "e8c82f51-dc50-45ff-aa5a-b6636f16ce22" (UID: "e8c82f51-dc50-45ff-aa5a-b6636f16ce22"). InnerVolumeSpecName "kube-api-access-txtgb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002760 2810 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-bpf-maps\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002826 2810 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-hubble-tls\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002859 2810 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-lib-modules\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002876 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56526d92-8267-4c39-b176-3a2d0823d621-cilium-config-path\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002889 2810 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-etc-cni-netd\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002902 2810 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56526d92-8267-4c39-b176-3a2d0823d621-clustermesh-secrets\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.002898 kubelet[2810]: I0129 16:15:25.002920 2810 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-xtables-lock\") on node 
\"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002933 2810 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-kernel\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002947 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-cgroup\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002959 2810 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-hostproc\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002970 2810 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-host-proc-sys-net\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002982 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-txtgb\" (UniqueName: \"kubernetes.io/projected/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-kube-api-access-txtgb\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.002994 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8c82f51-dc50-45ff-aa5a-b6636f16ce22-cilium-config-path\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.003005 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nprg4\" (UniqueName: 
\"kubernetes.io/projected/56526d92-8267-4c39-b176-3a2d0823d621-kube-api-access-nprg4\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003273 kubelet[2810]: I0129 16:15:25.003017 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cilium-run\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.003536 kubelet[2810]: I0129 16:15:25.003029 2810 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56526d92-8267-4c39-b176-3a2d0823d621-cni-path\") on node \"ci-4230-0-0-d-0116a6be22\" DevicePath \"\"" Jan 29 16:15:25.284448 systemd[1]: Removed slice kubepods-besteffort-pode8c82f51_dc50_45ff_aa5a_b6636f16ce22.slice - libcontainer container kubepods-besteffort-pode8c82f51_dc50_45ff_aa5a_b6636f16ce22.slice. Jan 29 16:15:25.287661 systemd[1]: Removed slice kubepods-burstable-pod56526d92_8267_4c39_b176_3a2d0823d621.slice - libcontainer container kubepods-burstable-pod56526d92_8267_4c39_b176_3a2d0823d621.slice. Jan 29 16:15:25.288321 systemd[1]: kubepods-burstable-pod56526d92_8267_4c39_b176_3a2d0823d621.slice: Consumed 8.278s CPU time, 123.5M memory peak, 144K read from disk, 12.9M written to disk. 
Jan 29 16:15:25.308553 kubelet[2810]: I0129 16:15:25.308523 2810 scope.go:117] "RemoveContainer" containerID="2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf" Jan 29 16:15:25.313918 containerd[1518]: time="2025-01-29T16:15:25.313727465Z" level=info msg="RemoveContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\"" Jan 29 16:15:25.329894 containerd[1518]: time="2025-01-29T16:15:25.327341781Z" level=info msg="RemoveContainer for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" returns successfully" Jan 29 16:15:25.329987 kubelet[2810]: I0129 16:15:25.328294 2810 scope.go:117] "RemoveContainer" containerID="c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702" Jan 29 16:15:25.334610 containerd[1518]: time="2025-01-29T16:15:25.334569680Z" level=info msg="RemoveContainer for \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\"" Jan 29 16:15:25.338020 containerd[1518]: time="2025-01-29T16:15:25.337977730Z" level=info msg="RemoveContainer for \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\" returns successfully" Jan 29 16:15:25.340926 kubelet[2810]: I0129 16:15:25.338527 2810 scope.go:117] "RemoveContainer" containerID="fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2" Jan 29 16:15:25.341836 containerd[1518]: time="2025-01-29T16:15:25.341807900Z" level=info msg="RemoveContainer for \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\"" Jan 29 16:15:25.347729 containerd[1518]: time="2025-01-29T16:15:25.347676915Z" level=info msg="RemoveContainer for \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\" returns successfully" Jan 29 16:15:25.348439 kubelet[2810]: I0129 16:15:25.348420 2810 scope.go:117] "RemoveContainer" containerID="0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f" Jan 29 16:15:25.350388 containerd[1518]: time="2025-01-29T16:15:25.350346643Z" level=info msg="RemoveContainer for 
\"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\"" Jan 29 16:15:25.354956 containerd[1518]: time="2025-01-29T16:15:25.354913415Z" level=info msg="RemoveContainer for \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\" returns successfully" Jan 29 16:15:25.355325 kubelet[2810]: I0129 16:15:25.355293 2810 scope.go:117] "RemoveContainer" containerID="ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409" Jan 29 16:15:25.356538 containerd[1518]: time="2025-01-29T16:15:25.356510379Z" level=info msg="RemoveContainer for \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\"" Jan 29 16:15:25.359991 containerd[1518]: time="2025-01-29T16:15:25.359864948Z" level=info msg="RemoveContainer for \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\" returns successfully" Jan 29 16:15:25.360471 kubelet[2810]: I0129 16:15:25.360235 2810 scope.go:117] "RemoveContainer" containerID="2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf" Jan 29 16:15:25.360718 containerd[1518]: time="2025-01-29T16:15:25.360619750Z" level=error msg="ContainerStatus for \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\": not found" Jan 29 16:15:25.360836 kubelet[2810]: E0129 16:15:25.360787 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\": not found" containerID="2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf" Jan 29 16:15:25.360997 kubelet[2810]: I0129 16:15:25.360865 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf"} err="failed to get 
container status \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d3e532c4a3909a7cf362f43d3d22e21bccfa596eaf57097ba0dd04b13046aaf\": not found" Jan 29 16:15:25.361044 kubelet[2810]: I0129 16:15:25.361005 2810 scope.go:117] "RemoveContainer" containerID="c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702" Jan 29 16:15:25.361364 containerd[1518]: time="2025-01-29T16:15:25.361340112Z" level=error msg="ContainerStatus for \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\": not found" Jan 29 16:15:25.361633 kubelet[2810]: E0129 16:15:25.361530 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\": not found" containerID="c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702" Jan 29 16:15:25.361633 kubelet[2810]: I0129 16:15:25.361557 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702"} err="failed to get container status \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\": rpc error: code = NotFound desc = an error occurred when try to find container \"c780dbf7c991db0fa54acacd25c238b4aad3060644714414f533f9c0f0249702\": not found" Jan 29 16:15:25.361633 kubelet[2810]: I0129 16:15:25.361575 2810 scope.go:117] "RemoveContainer" containerID="fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2" Jan 29 16:15:25.361995 containerd[1518]: time="2025-01-29T16:15:25.361913873Z" level=error msg="ContainerStatus for 
\"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\": not found" Jan 29 16:15:25.362107 kubelet[2810]: E0129 16:15:25.362062 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\": not found" containerID="fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2" Jan 29 16:15:25.362158 kubelet[2810]: I0129 16:15:25.362101 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2"} err="failed to get container status \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdbcb7745c31389352f8c53c743d467272a18e03a64bb04f646a08c15c009ec2\": not found" Jan 29 16:15:25.362158 kubelet[2810]: I0129 16:15:25.362128 2810 scope.go:117] "RemoveContainer" containerID="0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f" Jan 29 16:15:25.362464 containerd[1518]: time="2025-01-29T16:15:25.362424155Z" level=error msg="ContainerStatus for \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\": not found" Jan 29 16:15:25.362716 kubelet[2810]: E0129 16:15:25.362610 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\": not found" 
containerID="0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f" Jan 29 16:15:25.362716 kubelet[2810]: I0129 16:15:25.362634 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f"} err="failed to get container status \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e671473fa351e0fb813cd563ea481e0d104d94b6e8f2124bcfc4bef9990a01f\": not found" Jan 29 16:15:25.362716 kubelet[2810]: I0129 16:15:25.362654 2810 scope.go:117] "RemoveContainer" containerID="ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409" Jan 29 16:15:25.363035 containerd[1518]: time="2025-01-29T16:15:25.362974556Z" level=error msg="ContainerStatus for \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\": not found" Jan 29 16:15:25.363233 kubelet[2810]: E0129 16:15:25.363110 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\": not found" containerID="ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409" Jan 29 16:15:25.363233 kubelet[2810]: I0129 16:15:25.363146 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409"} err="failed to get container status \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea50c84bbdbd1150059186a12fedd4eb972a3c146ca96a7686afd749e46e4409\": not found" Jan 29 
16:15:25.363233 kubelet[2810]: I0129 16:15:25.363171 2810 scope.go:117] "RemoveContainer" containerID="7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7" Jan 29 16:15:25.364834 containerd[1518]: time="2025-01-29T16:15:25.364570880Z" level=info msg="RemoveContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\"" Jan 29 16:15:25.367605 containerd[1518]: time="2025-01-29T16:15:25.367543408Z" level=info msg="RemoveContainer for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" returns successfully" Jan 29 16:15:25.368084 kubelet[2810]: I0129 16:15:25.367966 2810 scope.go:117] "RemoveContainer" containerID="7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7" Jan 29 16:15:25.368301 containerd[1518]: time="2025-01-29T16:15:25.368195290Z" level=error msg="ContainerStatus for \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\": not found" Jan 29 16:15:25.368460 kubelet[2810]: E0129 16:15:25.368399 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\": not found" containerID="7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7" Jan 29 16:15:25.368460 kubelet[2810]: I0129 16:15:25.368428 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7"} err="failed to get container status \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f6593f43332baa145e0abba14e91554eca38719d8611af359313a07ba328be7\": not found" Jan 29 16:15:25.424188 systemd[1]: 
Started sshd@53-167.235.198.80:22-149.50.252.131:56284.service - OpenSSH per-connection server daemon (149.50.252.131:56284). Jan 29 16:15:25.480999 kubelet[2810]: E0129 16:15:25.480953 2810 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:15:25.484472 systemd[1]: Started sshd@54-167.235.198.80:22-149.50.252.131:56294.service - OpenSSH per-connection server daemon (149.50.252.131:56294). Jan 29 16:15:25.556533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27-rootfs.mount: Deactivated successfully. Jan 29 16:15:25.556648 systemd[1]: var-lib-kubelet-pods-e8c82f51\x2ddc50\x2d45ff\x2daa5a\x2db6636f16ce22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxtgb.mount: Deactivated successfully. Jan 29 16:15:25.556714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1-rootfs.mount: Deactivated successfully. Jan 29 16:15:25.556780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1-shm.mount: Deactivated successfully. Jan 29 16:15:25.556906 systemd[1]: var-lib-kubelet-pods-56526d92\x2d8267\x2d4c39\x2db176\x2d3a2d0823d621-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnprg4.mount: Deactivated successfully. Jan 29 16:15:25.556976 systemd[1]: var-lib-kubelet-pods-56526d92\x2d8267\x2d4c39\x2db176\x2d3a2d0823d621-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:15:25.557038 systemd[1]: var-lib-kubelet-pods-56526d92\x2d8267\x2d4c39\x2db176\x2d3a2d0823d621-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 16:15:25.590937 sshd[4640]: Connection closed by 149.50.252.131 port 56284 [preauth] Jan 29 16:15:25.594004 systemd[1]: sshd@53-167.235.198.80:22-149.50.252.131:56284.service: Deactivated successfully. Jan 29 16:15:25.671935 sshd[4643]: Connection closed by 149.50.252.131 port 56294 [preauth] Jan 29 16:15:25.672982 systemd[1]: sshd@54-167.235.198.80:22-149.50.252.131:56294.service: Deactivated successfully. Jan 29 16:15:26.635408 sshd[4479]: Connection closed by 139.178.68.195 port 54568 Jan 29 16:15:26.636214 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:26.641698 systemd[1]: sshd@52-167.235.198.80:22-139.178.68.195:54568.service: Deactivated successfully. Jan 29 16:15:26.645727 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:15:26.646000 systemd[1]: session-21.scope: Consumed 1.545s CPU time, 25.7M memory peak. Jan 29 16:15:26.649062 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:15:26.651087 systemd-logind[1494]: Removed session 21. Jan 29 16:15:26.817217 systemd[1]: Started sshd@55-167.235.198.80:22-139.178.68.195:45544.service - OpenSSH per-connection server daemon (139.178.68.195:45544). 
Jan 29 16:15:27.275627 kubelet[2810]: I0129 16:15:27.275558 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56526d92-8267-4c39-b176-3a2d0823d621" path="/var/lib/kubelet/pods/56526d92-8267-4c39-b176-3a2d0823d621/volumes" Jan 29 16:15:27.277078 kubelet[2810]: I0129 16:15:27.277038 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c82f51-dc50-45ff-aa5a-b6636f16ce22" path="/var/lib/kubelet/pods/e8c82f51-dc50-45ff-aa5a-b6636f16ce22/volumes" Jan 29 16:15:27.815889 sshd[4653]: Accepted publickey for core from 139.178.68.195 port 45544 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:27.818102 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:27.824595 systemd-logind[1494]: New session 22 of user core. Jan 29 16:15:27.835030 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:15:29.179683 kubelet[2810]: I0129 16:15:29.179635 2810 memory_manager.go:355] "RemoveStaleState removing state" podUID="e8c82f51-dc50-45ff-aa5a-b6636f16ce22" containerName="cilium-operator" Jan 29 16:15:29.179683 kubelet[2810]: I0129 16:15:29.179670 2810 memory_manager.go:355] "RemoveStaleState removing state" podUID="56526d92-8267-4c39-b176-3a2d0823d621" containerName="cilium-agent" Jan 29 16:15:29.189639 systemd[1]: Created slice kubepods-burstable-pod24240ace_916e_44d8_91ee_767595054588.slice - libcontainer container kubepods-burstable-pod24240ace_916e_44d8_91ee_767595054588.slice. 
Jan 29 16:15:29.333532 kubelet[2810]: I0129 16:15:29.333454 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-bpf-maps\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333532 kubelet[2810]: I0129 16:15:29.333517 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-etc-cni-netd\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333557 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24240ace-916e-44d8-91ee-767595054588-cilium-ipsec-secrets\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333586 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-lib-modules\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333622 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-hostproc\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333653 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-cni-path\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333679 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-xtables-lock\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.333846 kubelet[2810]: I0129 16:15:29.333710 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24240ace-916e-44d8-91ee-767595054588-clustermesh-secrets\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334169 kubelet[2810]: I0129 16:15:29.333746 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24240ace-916e-44d8-91ee-767595054588-hubble-tls\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334169 kubelet[2810]: I0129 16:15:29.333836 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24240ace-916e-44d8-91ee-767595054588-cilium-config-path\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334169 kubelet[2810]: I0129 16:15:29.333878 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-host-proc-sys-net\") pod 
\"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334169 kubelet[2810]: I0129 16:15:29.333912 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-host-proc-sys-kernel\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334169 kubelet[2810]: I0129 16:15:29.333949 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-cilium-cgroup\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334447 kubelet[2810]: I0129 16:15:29.333977 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxpf\" (UniqueName: \"kubernetes.io/projected/24240ace-916e-44d8-91ee-767595054588-kube-api-access-qzxpf\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.334447 kubelet[2810]: I0129 16:15:29.334008 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24240ace-916e-44d8-91ee-767595054588-cilium-run\") pod \"cilium-xs782\" (UID: \"24240ace-916e-44d8-91ee-767595054588\") " pod="kube-system/cilium-xs782" Jan 29 16:15:29.379887 sshd[4655]: Connection closed by 139.178.68.195 port 45544 Jan 29 16:15:29.380198 sshd-session[4653]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:29.384997 systemd[1]: sshd@55-167.235.198.80:22-139.178.68.195:45544.service: Deactivated successfully. Jan 29 16:15:29.388768 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 29 16:15:29.390998 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:15:29.392337 systemd-logind[1494]: Removed session 22. Jan 29 16:15:29.496920 containerd[1518]: time="2025-01-29T16:15:29.496566254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xs782,Uid:24240ace-916e-44d8-91ee-767595054588,Namespace:kube-system,Attempt:0,}" Jan 29 16:15:29.522193 containerd[1518]: time="2025-01-29T16:15:29.521909360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:15:29.522193 containerd[1518]: time="2025-01-29T16:15:29.521970440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:15:29.522193 containerd[1518]: time="2025-01-29T16:15:29.521982200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:29.522193 containerd[1518]: time="2025-01-29T16:15:29.522063440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:29.559159 systemd[1]: Started cri-containerd-3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7.scope - libcontainer container 3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7. Jan 29 16:15:29.565202 systemd[1]: Started sshd@56-167.235.198.80:22-139.178.68.195:45552.service - OpenSSH per-connection server daemon (139.178.68.195:45552). 
Jan 29 16:15:29.602326 containerd[1518]: time="2025-01-29T16:15:29.602166048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xs782,Uid:24240ace-916e-44d8-91ee-767595054588,Namespace:kube-system,Attempt:0,} returns sandbox id \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\"" Jan 29 16:15:29.607193 containerd[1518]: time="2025-01-29T16:15:29.607152421Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:15:29.618663 containerd[1518]: time="2025-01-29T16:15:29.618562890Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236\"" Jan 29 16:15:29.620764 containerd[1518]: time="2025-01-29T16:15:29.620710576Z" level=info msg="StartContainer for \"ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236\"" Jan 29 16:15:29.648063 systemd[1]: Started cri-containerd-ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236.scope - libcontainer container ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236. Jan 29 16:15:29.681063 containerd[1518]: time="2025-01-29T16:15:29.681005452Z" level=info msg="StartContainer for \"ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236\" returns successfully" Jan 29 16:15:29.690718 systemd[1]: cri-containerd-ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236.scope: Deactivated successfully. 
Jan 29 16:15:29.726356 containerd[1518]: time="2025-01-29T16:15:29.726272889Z" level=info msg="shim disconnected" id=ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236 namespace=k8s.io Jan 29 16:15:29.726639 containerd[1518]: time="2025-01-29T16:15:29.726618730Z" level=warning msg="cleaning up after shim disconnected" id=ca40234f932d18523b334f79caa6d05ce31fa61667b2ae873b6f1f4b66410236 namespace=k8s.io Jan 29 16:15:29.726780 containerd[1518]: time="2025-01-29T16:15:29.726691890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:30.342230 containerd[1518]: time="2025-01-29T16:15:30.341957716Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:15:30.352982 containerd[1518]: time="2025-01-29T16:15:30.352786424Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069\"" Jan 29 16:15:30.354874 containerd[1518]: time="2025-01-29T16:15:30.354325828Z" level=info msg="StartContainer for \"08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069\"" Jan 29 16:15:30.389447 systemd[1]: Started cri-containerd-08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069.scope - libcontainer container 08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069. Jan 29 16:15:30.419382 containerd[1518]: time="2025-01-29T16:15:30.419309155Z" level=info msg="StartContainer for \"08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069\" returns successfully" Jan 29 16:15:30.428949 systemd[1]: cri-containerd-08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069.scope: Deactivated successfully. 
Jan 29 16:15:30.457760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069-rootfs.mount: Deactivated successfully. Jan 29 16:15:30.463242 containerd[1518]: time="2025-01-29T16:15:30.463139547Z" level=info msg="shim disconnected" id=08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069 namespace=k8s.io Jan 29 16:15:30.463242 containerd[1518]: time="2025-01-29T16:15:30.463225508Z" level=warning msg="cleaning up after shim disconnected" id=08544b675f364af09658b2e523c2d1c00e804dd5dbd3cb77da7ed388038c8069 namespace=k8s.io Jan 29 16:15:30.463242 containerd[1518]: time="2025-01-29T16:15:30.463244108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:30.482528 kubelet[2810]: E0129 16:15:30.482433 2810 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:15:30.569043 sshd[4697]: Accepted publickey for core from 139.178.68.195 port 45552 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:30.571405 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:30.577251 systemd-logind[1494]: New session 23 of user core. Jan 29 16:15:30.584084 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:15:31.248221 sshd[4838]: Connection closed by 139.178.68.195 port 45552 Jan 29 16:15:31.248909 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:31.255683 systemd[1]: sshd@56-167.235.198.80:22-139.178.68.195:45552.service: Deactivated successfully. Jan 29 16:15:31.260390 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:15:31.261549 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:15:31.262842 systemd-logind[1494]: Removed session 23. 
Jan 29 16:15:31.350279 containerd[1518]: time="2025-01-29T16:15:31.350233380Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:15:31.383076 containerd[1518]: time="2025-01-29T16:15:31.382908784Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161\"" Jan 29 16:15:31.384514 containerd[1518]: time="2025-01-29T16:15:31.383478745Z" level=info msg="StartContainer for \"64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161\"" Jan 29 16:15:31.427034 systemd[1]: Started cri-containerd-64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161.scope - libcontainer container 64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161. Jan 29 16:15:31.431729 systemd[1]: Started sshd@57-167.235.198.80:22-139.178.68.195:45560.service - OpenSSH per-connection server daemon (139.178.68.195:45560). Jan 29 16:15:31.473006 containerd[1518]: time="2025-01-29T16:15:31.472944774Z" level=info msg="StartContainer for \"64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161\" returns successfully" Jan 29 16:15:31.478134 systemd[1]: cri-containerd-64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161.scope: Deactivated successfully. Jan 29 16:15:31.501784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161-rootfs.mount: Deactivated successfully. 
Jan 29 16:15:31.511803 containerd[1518]: time="2025-01-29T16:15:31.511680032Z" level=info msg="shim disconnected" id=64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161 namespace=k8s.io Jan 29 16:15:31.512019 containerd[1518]: time="2025-01-29T16:15:31.511781953Z" level=warning msg="cleaning up after shim disconnected" id=64f4075319c8c782838f71b9d27e03e90a1758ac19301fa09a169c907eba1161 namespace=k8s.io Jan 29 16:15:31.512019 containerd[1518]: time="2025-01-29T16:15:31.511872033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:31.527564 containerd[1518]: time="2025-01-29T16:15:31.527515193Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:15:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:15:32.353978 containerd[1518]: time="2025-01-29T16:15:32.353934855Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:15:32.373514 containerd[1518]: time="2025-01-29T16:15:32.373443265Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e\"" Jan 29 16:15:32.374080 containerd[1518]: time="2025-01-29T16:15:32.374049906Z" level=info msg="StartContainer for \"0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e\"" Jan 29 16:15:32.409025 systemd[1]: Started cri-containerd-0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e.scope - libcontainer container 0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e. 
Jan 29 16:15:32.436219 systemd[1]: cri-containerd-0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e.scope: Deactivated successfully. Jan 29 16:15:32.439867 sshd[4863]: Accepted publickey for core from 139.178.68.195 port 45560 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:32.441136 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:32.442617 containerd[1518]: time="2025-01-29T16:15:32.442253239Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24240ace_916e_44d8_91ee_767595054588.slice/cri-containerd-0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e.scope/memory.events\": no such file or directory" Jan 29 16:15:32.446169 containerd[1518]: time="2025-01-29T16:15:32.446105529Z" level=info msg="StartContainer for \"0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e\" returns successfully" Jan 29 16:15:32.455953 systemd-logind[1494]: New session 24 of user core. Jan 29 16:15:32.461181 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:15:32.471780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e-rootfs.mount: Deactivated successfully. 
Jan 29 16:15:32.477832 containerd[1518]: time="2025-01-29T16:15:32.477662849Z" level=info msg="shim disconnected" id=0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e namespace=k8s.io Jan 29 16:15:32.477832 containerd[1518]: time="2025-01-29T16:15:32.477719569Z" level=warning msg="cleaning up after shim disconnected" id=0d670cafb8997fc75a523d69c1b82c28d196a175aac67a6bc4fa792a2b58d36e namespace=k8s.io Jan 29 16:15:32.477832 containerd[1518]: time="2025-01-29T16:15:32.477728729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:33.212855 kubelet[2810]: I0129 16:15:33.210987 2810 setters.go:602] "Node became not ready" node="ci-4230-0-0-d-0116a6be22" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:15:33Z","lastTransitionTime":"2025-01-29T16:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:15:33.359972 containerd[1518]: time="2025-01-29T16:15:33.359928078Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:15:33.381890 containerd[1518]: time="2025-01-29T16:15:33.381137331Z" level=info msg="CreateContainer within sandbox \"3990a03adbafbe4bc5aa491b7f5970f452a2a7dd3d7e373989b250e97b695ae7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6\"" Jan 29 16:15:33.382969 containerd[1518]: time="2025-01-29T16:15:33.382925936Z" level=info msg="StartContainer for \"9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6\"" Jan 29 16:15:33.415027 systemd[1]: Started cri-containerd-9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6.scope - libcontainer container 
9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6. Jan 29 16:15:33.453303 containerd[1518]: time="2025-01-29T16:15:33.453178353Z" level=info msg="StartContainer for \"9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6\" returns successfully" Jan 29 16:15:33.802939 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 29 16:15:34.172265 systemd[1]: Started sshd@58-167.235.198.80:22-134.122.8.241:38904.service - OpenSSH per-connection server daemon (134.122.8.241:38904). Jan 29 16:15:34.383910 kubelet[2810]: I0129 16:15:34.383554 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xs782" podStartSLOduration=5.383533527 podStartE2EDuration="5.383533527s" podCreationTimestamp="2025-01-29 16:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:34.383293887 +0000 UTC m=+359.210864641" watchObservedRunningTime="2025-01-29 16:15:34.383533527 +0000 UTC m=+359.211104241" Jan 29 16:15:34.731705 sshd[5088]: Invalid user kernel from 134.122.8.241 port 38904 Jan 29 16:15:34.825812 sshd[5088]: Received disconnect from 134.122.8.241 port 38904:11: Bye Bye [preauth] Jan 29 16:15:34.825812 sshd[5088]: Disconnected from invalid user kernel 134.122.8.241 port 38904 [preauth] Jan 29 16:15:34.828630 systemd[1]: sshd@58-167.235.198.80:22-134.122.8.241:38904.service: Deactivated successfully. 
Jan 29 16:15:35.324173 containerd[1518]: time="2025-01-29T16:15:35.324124433Z" level=info msg="StopPodSandbox for \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\"" Jan 29 16:15:35.324534 containerd[1518]: time="2025-01-29T16:15:35.324230753Z" level=info msg="TearDown network for sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" successfully" Jan 29 16:15:35.324534 containerd[1518]: time="2025-01-29T16:15:35.324242513Z" level=info msg="StopPodSandbox for \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" returns successfully" Jan 29 16:15:35.325149 containerd[1518]: time="2025-01-29T16:15:35.325065435Z" level=info msg="RemovePodSandbox for \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\"" Jan 29 16:15:35.325149 containerd[1518]: time="2025-01-29T16:15:35.325108155Z" level=info msg="Forcibly stopping sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\"" Jan 29 16:15:35.325248 containerd[1518]: time="2025-01-29T16:15:35.325170035Z" level=info msg="TearDown network for sandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" successfully" Jan 29 16:15:35.331031 containerd[1518]: time="2025-01-29T16:15:35.330971930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:15:35.331450 containerd[1518]: time="2025-01-29T16:15:35.331052810Z" level=info msg="RemovePodSandbox \"a3e4680e3248d799d3e3e85a3e45489c394388802fb8c62f71cbbd267ffd7be1\" returns successfully" Jan 29 16:15:35.333061 containerd[1518]: time="2025-01-29T16:15:35.333017135Z" level=info msg="StopPodSandbox for \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\"" Jan 29 16:15:35.333127 containerd[1518]: time="2025-01-29T16:15:35.333113215Z" level=info msg="TearDown network for sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" successfully" Jan 29 16:15:35.333127 containerd[1518]: time="2025-01-29T16:15:35.333123735Z" level=info msg="StopPodSandbox for \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" returns successfully" Jan 29 16:15:35.333512 containerd[1518]: time="2025-01-29T16:15:35.333488576Z" level=info msg="RemovePodSandbox for \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\"" Jan 29 16:15:35.333563 containerd[1518]: time="2025-01-29T16:15:35.333514216Z" level=info msg="Forcibly stopping sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\"" Jan 29 16:15:35.333590 containerd[1518]: time="2025-01-29T16:15:35.333562496Z" level=info msg="TearDown network for sandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" successfully" Jan 29 16:15:35.337254 containerd[1518]: time="2025-01-29T16:15:35.337172865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:15:35.337254 containerd[1518]: time="2025-01-29T16:15:35.337249265Z" level=info msg="RemovePodSandbox \"e7a12d3dd80f871efa2e61d03335d189114b7c893414113e9f5fc1f6e898ec27\" returns successfully" Jan 29 16:15:36.752692 systemd-networkd[1417]: lxc_health: Link UP Jan 29 16:15:36.775491 systemd-networkd[1417]: lxc_health: Gained carrier Jan 29 16:15:37.824074 systemd-networkd[1417]: lxc_health: Gained IPv6LL Jan 29 16:15:39.606703 systemd[1]: run-containerd-runc-k8s.io-9420e36ad5c33b07901313be47dd0c5e5de57ac26fcd628c278101bca4b2cdb6-runc.H8FCRF.mount: Deactivated successfully. Jan 29 16:15:39.671010 kubelet[2810]: E0129 16:15:39.670958 2810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59394->127.0.0.1:41849: write tcp 127.0.0.1:59394->127.0.0.1:41849: write: broken pipe Jan 29 16:15:41.977677 sshd[4944]: Connection closed by 139.178.68.195 port 45560 Jan 29 16:15:41.978532 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:41.983156 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:15:41.983416 systemd[1]: sshd@57-167.235.198.80:22-139.178.68.195:45560.service: Deactivated successfully. Jan 29 16:15:41.987409 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:15:41.990338 systemd-logind[1494]: Removed session 24. Jan 29 16:16:05.733071 kubelet[2810]: E0129 16:16:05.732769 2810 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38210->10.0.0.2:2379: read: connection timed out" Jan 29 16:16:05.742139 systemd[1]: cri-containerd-7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3.scope: Deactivated successfully. Jan 29 16:16:05.742867 systemd[1]: cri-containerd-7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3.scope: Consumed 5.197s CPU time, 23.8M memory peak. 
Jan 29 16:16:05.771492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3-rootfs.mount: Deactivated successfully. Jan 29 16:16:05.782008 containerd[1518]: time="2025-01-29T16:16:05.781712369Z" level=info msg="shim disconnected" id=7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3 namespace=k8s.io Jan 29 16:16:05.782008 containerd[1518]: time="2025-01-29T16:16:05.781918569Z" level=warning msg="cleaning up after shim disconnected" id=7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3 namespace=k8s.io Jan 29 16:16:05.782008 containerd[1518]: time="2025-01-29T16:16:05.781960889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:06.443483 kubelet[2810]: I0129 16:16:06.442965 2810 scope.go:117] "RemoveContainer" containerID="7ef5c201b69e02b73941542b7ea3028607379f29fd9365e4dc9f1e45c11538a3" Jan 29 16:16:06.446664 containerd[1518]: time="2025-01-29T16:16:06.446609948Z" level=info msg="CreateContainer within sandbox \"5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 16:16:06.470412 containerd[1518]: time="2025-01-29T16:16:06.470276717Z" level=info msg="CreateContainer within sandbox \"5bdbc356a7314d6e3768e49c331b58898635d7e5a0614910c2c620bc8eb26933\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"97fc9ce4db7a1ae58c4dc4f6d2e4274b335c0acb82f976b8a239153147fa3176\"" Jan 29 16:16:06.470858 containerd[1518]: time="2025-01-29T16:16:06.470830198Z" level=info msg="StartContainer for \"97fc9ce4db7a1ae58c4dc4f6d2e4274b335c0acb82f976b8a239153147fa3176\"" Jan 29 16:16:06.507012 systemd[1]: Started cri-containerd-97fc9ce4db7a1ae58c4dc4f6d2e4274b335c0acb82f976b8a239153147fa3176.scope - libcontainer container 97fc9ce4db7a1ae58c4dc4f6d2e4274b335c0acb82f976b8a239153147fa3176. 
Jan 29 16:16:06.545895 containerd[1518]: time="2025-01-29T16:16:06.545820474Z" level=info msg="StartContainer for \"97fc9ce4db7a1ae58c4dc4f6d2e4274b335c0acb82f976b8a239153147fa3176\" returns successfully" Jan 29 16:16:06.676547 systemd[1]: cri-containerd-56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477.scope: Deactivated successfully. Jan 29 16:16:06.677127 systemd[1]: cri-containerd-56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477.scope: Consumed 7.091s CPU time, 56.4M memory peak. Jan 29 16:16:06.710555 containerd[1518]: time="2025-01-29T16:16:06.709924333Z" level=info msg="shim disconnected" id=56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477 namespace=k8s.io Jan 29 16:16:06.710555 containerd[1518]: time="2025-01-29T16:16:06.709983734Z" level=warning msg="cleaning up after shim disconnected" id=56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477 namespace=k8s.io Jan 29 16:16:06.710555 containerd[1518]: time="2025-01-29T16:16:06.709991694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:06.773651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477-rootfs.mount: Deactivated successfully. 
Jan 29 16:16:07.138286 kubelet[2810]: E0129 16:16:07.138134 2810 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38050->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-d-0116a6be22.181f360145e93d82 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-d-0116a6be22,UID:3421003d42a59284d991bacfae98de7a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:16:00.284081538 +0000 UTC m=+385.111652292,LastTimestamp:2025-01-29 16:16:00.284081538 +0000 UTC m=+385.111652292,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:16:07.451958 kubelet[2810]: I0129 16:16:07.450621 2810 scope.go:117] "RemoveContainer" containerID="56283c2f09b2a4c2e7868d3f92cc118063dc40e28583dcc5e15a26cf97405477" Jan 29 16:16:07.453192 containerd[1518]: time="2025-01-29T16:16:07.453158148Z" level=info msg="CreateContainer within sandbox \"947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 16:16:07.477922 containerd[1518]: time="2025-01-29T16:16:07.476912877Z" level=info msg="CreateContainer within sandbox \"947ff973e7b14c2e6cafaa4508e495731b8dc730180cedd5903a57830f044c86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"181b2c64408cc371e89171c357a9b323547e672ebc86c9172bf64c89ba5e49fe\"" Jan 29 16:16:07.480042 containerd[1518]: time="2025-01-29T16:16:07.479157842Z" level=info 
msg="StartContainer for \"181b2c64408cc371e89171c357a9b323547e672ebc86c9172bf64c89ba5e49fe\"" Jan 29 16:16:07.525213 systemd[1]: Started cri-containerd-181b2c64408cc371e89171c357a9b323547e672ebc86c9172bf64c89ba5e49fe.scope - libcontainer container 181b2c64408cc371e89171c357a9b323547e672ebc86c9172bf64c89ba5e49fe. Jan 29 16:16:07.570410 containerd[1518]: time="2025-01-29T16:16:07.570337150Z" level=info msg="StartContainer for \"181b2c64408cc371e89171c357a9b323547e672ebc86c9172bf64c89ba5e49fe\" returns successfully" Jan 29 16:16:15.734102 kubelet[2810]: E0129 16:16:15.733972 2810 controller.go:195] "Failed to update lease" err="Put \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:16:16.674616 kubelet[2810]: I0129 16:16:16.674362 2810 status_manager.go:890] "Failed to get status for pod" podUID="f0f740f29ca5f9e91bdfd69b86c424b1" pod="kube-system/kube-scheduler-ci-4230-0-0-d-0116a6be22" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38154->10.0.0.2:2379: read: connection timed out" Jan 29 16:16:25.734572 kubelet[2810]: E0129 16:16:25.734482 2810 controller.go:195] "Failed to update lease" err="Put \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:16:35.736851 kubelet[2810]: E0129 16:16:35.734984 2810 controller.go:195] "Failed to update lease" err="Put \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:16:35.914151 systemd[1]: Started sshd@59-167.235.198.80:22-103.142.199.159:57964.service - OpenSSH per-connection server daemon 
(103.142.199.159:57964). Jan 29 16:16:36.743604 sshd[5751]: Invalid user vpnuser1 from 103.142.199.159 port 57964 Jan 29 16:16:36.902145 sshd[5751]: Received disconnect from 103.142.199.159 port 57964:11: Bye Bye [preauth] Jan 29 16:16:36.902145 sshd[5751]: Disconnected from invalid user vpnuser1 103.142.199.159 port 57964 [preauth] Jan 29 16:16:36.904882 systemd[1]: sshd@59-167.235.198.80:22-103.142.199.159:57964.service: Deactivated successfully. Jan 29 16:16:41.141190 kubelet[2810]: E0129 16:16:41.141042 2810 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-d-0116a6be22.181f3601f48e1be4 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-d-0116a6be22,UID:3421003d42a59284d991bacfae98de7a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-0116a6be22,},FirstTimestamp:2025-01-29 16:16:03.214121956 +0000 UTC m=+388.041692670,LastTimestamp:2025-01-29 16:16:03.214121956 +0000 UTC m=+388.041692670,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-0116a6be22,}" Jan 29 16:16:43.166229 systemd[1]: Started sshd@60-167.235.198.80:22-134.122.8.241:37258.service - OpenSSH per-connection server daemon (134.122.8.241:37258). 
Jan 29 16:16:43.722817 sshd[5759]: Invalid user administrator from 134.122.8.241 port 37258 Jan 29 16:16:43.818051 sshd[5759]: Received disconnect from 134.122.8.241 port 37258:11: Bye Bye [preauth] Jan 29 16:16:43.818051 sshd[5759]: Disconnected from invalid user administrator 134.122.8.241 port 37258 [preauth] Jan 29 16:16:43.819959 systemd[1]: sshd@60-167.235.198.80:22-134.122.8.241:37258.service: Deactivated successfully. Jan 29 16:16:45.736413 kubelet[2810]: E0129 16:16:45.736087 2810 controller.go:195] "Failed to update lease" err="Put \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-0116a6be22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:16:45.736413 kubelet[2810]: I0129 16:16:45.736152 2810 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"