Jan 13 20:16:53.934585 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:53.934623 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:16:53.934637 kernel: KASLR enabled
Jan 13 20:16:53.934643 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:53.934650 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x13479b218 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132357218 
Jan 13 20:16:53.934656 kernel: random: crng init done
Jan 13 20:16:53.934664 kernel: secureboot: Secure boot disabled
Jan 13 20:16:53.934670 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:53.934677 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:16:53.934683 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS  BXPC     00000001      01000013)
Jan 13 20:16:53.934692 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934698 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934706 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934712 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934720 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934728 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934735 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934742 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934748 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:16:53.934755 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL  EDK2     00000002      01000013)
Jan 13 20:16:53.934762 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:16:53.934768 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:53.934775 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:53.934830 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 13 20:16:53.934852 kernel: Zone ranges:
Jan 13 20:16:53.934860 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:16:53.934870 kernel:   DMA32    empty
Jan 13 20:16:53.934876 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:16:53.934883 kernel: Movable zone start for each node
Jan 13 20:16:53.934890 kernel: Early memory node ranges
Jan 13 20:16:53.934897 kernel:   node   0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:16:53.934904 kernel:   node   0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:16:53.934911 kernel:   node   0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:16:53.934917 kernel:   node   0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:16:53.934924 kernel:   node   0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:16:53.934931 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:53.934937 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:16:53.934945 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:53.934952 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:53.934959 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:53.934969 kernel: psci: Trusted OS migration not required
Jan 13 20:16:53.934975 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:53.934991 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:53.935002 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:53.935009 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:53.935017 kernel: pcpu-alloc: [0] 0 [0] 1 
Jan 13 20:16:53.935024 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:53.935031 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:53.935038 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:53.935045 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:53.935052 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:53.935059 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:53.935066 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:53.935073 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:53.935082 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:53.935089 kernel: alternatives: applying boot alternatives
Jan 13 20:16:53.935098 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:16:53.935105 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:53.935112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:53.935120 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:53.935127 kernel: Fallback order for Node 0: 0 
Jan 13 20:16:53.935134 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 13 20:16:53.935141 kernel: Policy zone: Normal
Jan 13 20:16:53.935148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:53.935155 kernel: software IO TLB: area num 2.
Jan 13 20:16:53.935164 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:16:53.935171 kernel: Memory: 3881016K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 214984K reserved, 0K cma-reserved)
Jan 13 20:16:53.935178 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:16:53.935185 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:53.935193 kernel: rcu:         RCU event tracing is enabled.
Jan 13 20:16:53.935201 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:16:53.935221 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:53.935230 kernel:         Tracing variant of Tasks RCU enabled.
Jan 13 20:16:53.935237 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:53.935244 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:16:53.935251 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:53.935261 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:53.935268 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:53.935275 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:53.935283 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:53.935290 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:53.935297 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:53.935304 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:53.935311 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:53.935318 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:16:53.935325 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:16:53.935339 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:53.935349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:53.935356 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:53.935364 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:53.935371 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:53.935378 kernel: Console: colour dummy device 80x25
Jan 13 20:16:53.935386 kernel: ACPI: Core revision 20230628
Jan 13 20:16:53.935393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:53.935401 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:53.935408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:53.935415 kernel: landlock: Up and running.
Jan 13 20:16:53.935424 kernel: SELinux:  Initializing.
Jan 13 20:16:53.935432 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:53.935456 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:53.935463 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:53.935471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:53.935479 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:53.935486 kernel: rcu:         Max phase no-delay instances is 400.
Jan 13 20:16:53.935494 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:53.935501 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:53.935510 kernel: Remapping and enabling EFI services.
Jan 13 20:16:53.935518 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:53.935525 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:53.935533 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:53.935541 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:16:53.935548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:53.935556 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:53.935563 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:16:53.935571 kernel: SMP: Total of 2 processors activated.
Jan 13 20:16:53.935578 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:53.935587 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:53.935595 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:53.935608 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:53.935638 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:53.935660 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:53.935669 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:53.935677 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:53.935684 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:53.935692 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:53.935727 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:53.935736 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:53.935744 kernel: devtmpfs: initialized
Jan 13 20:16:53.935752 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:53.935760 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:16:53.935767 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:53.935775 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:53.935785 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:16:53.935793 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:53.935801 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:53.935808 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:53.935816 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:53.935824 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:53.935832 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:53.935840 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:53.935847 kernel: cpuidle: using governor menu
Jan 13 20:16:53.935857 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:53.935864 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:53.935872 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:53.935880 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:53.935888 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:53.935896 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:53.935903 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:16:53.935911 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:53.935919 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:53.935928 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:53.935936 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:53.935944 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:53.935952 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:53.935959 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:53.935967 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:53.935975 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:53.935996 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:53.936005 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:53.936016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:53.936024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:53.936031 kernel: ACPI: Interpreter enabled
Jan 13 20:16:53.936039 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:53.936047 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:53.936055 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:53.936063 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:53.936070 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:53.936274 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:53.936374 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:53.938524 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:53.938694 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:53.938760 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:53.938770 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 13 20:16:53.938778 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:53.938875 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:53.938950 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:16:53.939009 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:53.939082 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:53.939181 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:53.939283 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:16:53.939353 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:16:53.939546 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:53.939711 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.939784 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:16:53.939881 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.939959 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:16:53.940035 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.940109 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:16:53.940253 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.940360 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:16:53.941588 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.941702 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:16:53.941778 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.941852 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:16:53.941924 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.942028 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:16:53.942120 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.942185 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:16:53.942278 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:53.942383 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:16:53.943567 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:16:53.943665 kernel: pci 0000:00:04.0: reg 0x10: [io  0x8200-0x8207]
Jan 13 20:16:53.943744 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:53.943835 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:16:53.943902 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:53.943967 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:53.944050 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:16:53.944151 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:16:53.944249 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:16:53.944356 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:16:53.946996 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:16:53.947149 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:16:53.947239 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:16:53.947410 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:16:53.947510 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:16:53.947641 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:16:53.947717 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:16:53.947856 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:53.948062 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:53.948263 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:16:53.948362 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:16:53.948432 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:53.950682 kernel: pci 0000:00:02.0: bridge window [io  0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:16:53.950823 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:53.950903 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:53.950988 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:16:53.951054 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:16:53.951119 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:16:53.951189 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:16:53.951274 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:53.951369 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:53.951491 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:16:53.951562 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:16:53.951634 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:16:53.951704 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:16:53.951768 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:16:53.951832 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:16:53.951901 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:16:53.951966 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:53.952031 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:53.952105 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:16:53.952170 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:53.952254 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:53.952327 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:16:53.952412 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:53.953945 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:53.954054 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:16:53.954121 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:53.954254 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:53.954336 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:16:53.954408 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:53.954493 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:16:53.954558 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:53.954670 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:16:53.954788 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:53.954897 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:16:53.954980 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:53.955054 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:16:53.955145 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:53.955242 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:53.955312 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:53.955390 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:53.956579 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:53.956677 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:53.956741 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:53.956808 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:16:53.956872 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:53.956940 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:16:53.957047 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:16:53.957129 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:16:53.957196 kernel: pci 0000:00:02.0: BAR 13: assigned [io  0x1000-0x1fff]
Jan 13 20:16:53.957494 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:16:53.957581 kernel: pci 0000:00:02.1: BAR 13: assigned [io  0x2000-0x2fff]
Jan 13 20:16:53.957652 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:16:53.957715 kernel: pci 0000:00:02.2: BAR 13: assigned [io  0x3000-0x3fff]
Jan 13 20:16:53.957783 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:16:53.957855 kernel: pci 0000:00:02.3: BAR 13: assigned [io  0x4000-0x4fff]
Jan 13 20:16:53.957988 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:16:53.958107 kernel: pci 0000:00:02.4: BAR 13: assigned [io  0x5000-0x5fff]
Jan 13 20:16:53.958182 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:16:53.958272 kernel: pci 0000:00:02.5: BAR 13: assigned [io  0x6000-0x6fff]
Jan 13 20:16:53.958404 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:16:53.960656 kernel: pci 0000:00:02.6: BAR 13: assigned [io  0x7000-0x7fff]
Jan 13 20:16:53.960759 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:16:53.960835 kernel: pci 0000:00:02.7: BAR 13: assigned [io  0x8000-0x8fff]
Jan 13 20:16:53.960927 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:16:53.961001 kernel: pci 0000:00:03.0: BAR 13: assigned [io  0x9000-0x9fff]
Jan 13 20:16:53.961079 kernel: pci 0000:00:04.0: BAR 0: assigned [io  0xa000-0xa007]
Jan 13 20:16:53.961194 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:16:53.961296 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:53.961367 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:16:53.963510 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:16:53.963783 kernel: pci 0000:00:02.0:   bridge window [io  0x1000-0x1fff]
Jan 13 20:16:53.963863 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:16:53.963928 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:53.964035 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:16:53.964155 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:16:53.964294 kernel: pci 0000:00:02.1:   bridge window [io  0x2000-0x2fff]
Jan 13 20:16:53.964395 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:16:53.964578 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:53.964657 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:53.964778 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:16:53.964900 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:16:53.965021 kernel: pci 0000:00:02.2:   bridge window [io  0x3000-0x3fff]
Jan 13 20:16:53.965118 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:16:53.965184 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:53.965304 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:53.965392 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:16:53.967608 kernel: pci 0000:00:02.3:   bridge window [io  0x4000-0x4fff]
Jan 13 20:16:53.967707 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:16:53.967771 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:53.967846 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:16:53.967922 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:16:53.967987 kernel: pci 0000:00:02.4:   bridge window [io  0x5000-0x5fff]
Jan 13 20:16:53.968049 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:16:53.968112 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:53.968185 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:16:53.968274 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:16:53.968346 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:16:53.968541 kernel: pci 0000:00:02.5:   bridge window [io  0x6000-0x6fff]
Jan 13 20:16:53.968678 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:53.968791 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:53.968912 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:16:53.968987 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:16:53.969060 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:16:53.969127 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:16:53.969192 kernel: pci 0000:00:02.6:   bridge window [io  0x7000-0x7fff]
Jan 13 20:16:53.969293 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:53.969371 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:53.969459 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:16:53.969527 kernel: pci 0000:00:02.7:   bridge window [io  0x8000-0x8fff]
Jan 13 20:16:53.969632 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:53.969705 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:53.969774 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:16:53.969844 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Jan 13 20:16:53.969917 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:16:53.970047 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:53.970120 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:53.970179 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:16:53.970255 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:53.970340 kernel: pci_bus 0000:01: resource 0 [io  0x1000-0x1fff]
Jan 13 20:16:53.970403 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:16:53.972626 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:53.972749 kernel: pci_bus 0000:02: resource 0 [io  0x2000-0x2fff]
Jan 13 20:16:53.972813 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:16:53.972872 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:53.972942 kernel: pci_bus 0000:03: resource 0 [io  0x3000-0x3fff]
Jan 13 20:16:53.973001 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:16:53.973091 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:53.973222 kernel: pci_bus 0000:04: resource 0 [io  0x4000-0x4fff]
Jan 13 20:16:53.973295 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:16:53.973360 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:53.973460 kernel: pci_bus 0000:05: resource 0 [io  0x5000-0x5fff]
Jan 13 20:16:53.973548 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:16:53.973625 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:53.973703 kernel: pci_bus 0000:06: resource 0 [io  0x6000-0x6fff]
Jan 13 20:16:53.973771 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:53.973831 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:53.973900 kernel: pci_bus 0000:07: resource 0 [io  0x7000-0x7fff]
Jan 13 20:16:53.973964 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:53.974029 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:53.974099 kernel: pci_bus 0000:08: resource 0 [io  0x8000-0x8fff]
Jan 13 20:16:53.974238 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:53.974311 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:53.974422 kernel: pci_bus 0000:09: resource 0 [io  0x9000-0x9fff]
Jan 13 20:16:53.976709 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:16:53.976857 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:53.976880 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:53.976889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:53.976897 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:53.976904 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:53.976913 kernel: iommu: Default domain type: Translated
Jan 13 20:16:53.976921 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:53.976929 kernel: efivars: Registered efivars operations
Jan 13 20:16:53.976937 kernel: vgaarb: loaded
Jan 13 20:16:53.976983 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:53.976995 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:53.977004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:53.977012 kernel: pnp: PnP ACPI init
Jan 13 20:16:53.977105 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:53.977118 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:53.977127 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:53.977136 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:53.977145 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:53.977156 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:53.977165 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:53.977173 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:53.977181 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:53.977189 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:53.977197 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:53.977205 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:53.977326 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:53.977340 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:53.977352 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:53.977360 kernel: Initialise system trusted keyrings
Jan 13 20:16:53.977368 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:53.977377 kernel: Key type asymmetric registered
Jan 13 20:16:53.977385 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:53.977393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:53.977401 kernel: io scheduler mq-deadline registered
Jan 13 20:16:53.977409 kernel: io scheduler kyber registered
Jan 13 20:16:53.977417 kernel: io scheduler bfq registered
Jan 13 20:16:53.977428 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:16:53.977554 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:16:53.977654 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:16:53.977727 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.977798 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:16:53.977864 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:16:53.977935 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.978063 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:16:53.978137 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 13 20:16:53.978203 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.978313 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 13 20:16:53.978401 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 13 20:16:53.978529 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.978611 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 13 20:16:53.978678 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 13 20:16:53.978743 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.978814 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 13 20:16:53.978890 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 13 20:16:53.978969 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.979048 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 13 20:16:53.979119 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 13 20:16:53.979182 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.979266 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 13 20:16:53.979335 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 13 20:16:53.979450 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.979491 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 13 20:16:53.979583 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 13 20:16:53.979662 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 13 20:16:53.979732 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:53.979742 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:53.979751 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:53.979762 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:53.979835 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:53.979908 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:53.980004 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:53.980017 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:53.980028 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:16:53.980103 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 13 20:16:53.980114 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 13 20:16:53.980123 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:53.980134 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:53.980142 kernel: nicpf, ver 1.0
Jan 13 20:16:53.980150 kernel: nicvf, ver 1.0
Jan 13 20:16:53.980251 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:53.980316 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:53 UTC (1736799413)
Jan 13 20:16:53.980326 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:53.980334 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:53.980343 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:53.980380 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:53.980388 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:53.980396 kernel: Segment Routing with IPv6
Jan 13 20:16:53.980404 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:53.980412 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:53.980421 kernel: Key type dns_resolver registered
Jan 13 20:16:53.980429 kernel: registered taskstats version 1
Jan 13 20:16:53.980463 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:53.980472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:16:53.980483 kernel: Key type .fscrypt registered
Jan 13 20:16:53.980500 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:53.980509 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:53.980517 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:53.980525 kernel: ima: No architecture policies found
Jan 13 20:16:53.980533 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:53.980541 kernel: clk: Disabling unused clocks
Jan 13 20:16:53.980549 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:16:53.980579 kernel: Run /init as init process
Jan 13 20:16:53.980591 kernel:   with arguments:
Jan 13 20:16:53.980599 kernel:     /init
Jan 13 20:16:53.980607 kernel:   with environment:
Jan 13 20:16:53.980615 kernel:     HOME=/
Jan 13 20:16:53.980623 kernel:     TERM=linux
Jan 13 20:16:53.980631 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:53.980641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:53.980652 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:53.980662 systemd[1]: Detected architecture arm64.
Jan 13 20:16:53.980671 systemd[1]: Running in initrd.
Jan 13 20:16:53.980679 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:53.980687 systemd[1]: Hostname set to <localhost>.
Jan 13 20:16:53.980696 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:53.980705 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:53.980713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:53.980722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:53.980732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:53.980757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:53.980767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:53.980776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:53.980790 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:53.980799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:53.980809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:53.980818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:53.980826 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:53.980835 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:53.980843 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:53.980852 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:53.980860 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:53.980869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:53.980878 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:53.980888 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:53.980897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:53.980906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:53.980914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:53.980923 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:53.980932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:53.980940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:53.980949 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:53.980959 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:53.980968 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:53.980976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:53.980985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:53.980993 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:53.981002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:53.981042 systemd-journald[237]: Collecting audit messages is disabled.
Jan 13 20:16:53.981065 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:53.981077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:53.981088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:53.981097 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:53.981108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:53.981118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:53.981127 systemd-journald[237]: Journal started
Jan 13 20:16:53.981153 systemd-journald[237]: Runtime Journal (/run/log/journal/a65cd1ee9c0943c98a87f1522aec6f5b) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:53.955565 systemd-modules-load[238]: Inserted module 'overlay'
Jan 13 20:16:53.985335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:53.987544 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:53.996074 kernel: Bridge firewalling registered
Jan 13 20:16:53.993020 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 13 20:16:54.001741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:54.003512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:54.005993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:54.014768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:54.017874 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:54.022309 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:54.034707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:54.038020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:54.042283 dracut-cmdline[268]: dracut-dracut-053
Jan 13 20:16:54.047634 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:16:54.051726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:54.085364 systemd-resolved[283]: Positive Trust Anchors:
Jan 13 20:16:54.085388 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:54.085421 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:54.095282 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jan 13 20:16:54.097499 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:54.098727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:54.191686 kernel: SCSI subsystem initialized
Jan 13 20:16:54.198535 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:54.208491 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:54.225533 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:54.225620 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:54.295092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:54.300709 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:54.338711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:54.338812 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:54.338839 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:54.392670 kernel: raid6: neonx8   gen() 15370 MB/s
Jan 13 20:16:54.409609 kernel: raid6: neonx4   gen() 15552 MB/s
Jan 13 20:16:54.426625 kernel: raid6: neonx2   gen() 13082 MB/s
Jan 13 20:16:54.443500 kernel: raid6: neonx1   gen() 10466 MB/s
Jan 13 20:16:54.460839 kernel: raid6: int64x8  gen()  6761 MB/s
Jan 13 20:16:54.477500 kernel: raid6: int64x4  gen()  7299 MB/s
Jan 13 20:16:54.495141 kernel: raid6: int64x2  gen()  6068 MB/s
Jan 13 20:16:54.511495 kernel: raid6: int64x1  gen()  5037 MB/s
Jan 13 20:16:54.511569 kernel: raid6: using algorithm neonx4 gen() 15552 MB/s
Jan 13 20:16:54.528485 kernel: raid6: .... xor() 12254 MB/s, rmw enabled
Jan 13 20:16:54.528557 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:54.534155 kernel: xor: measuring software checksum speed
Jan 13 20:16:54.534252 kernel:    8regs           : 21601 MB/sec
Jan 13 20:16:54.534286 kernel:    32regs          : 21687 MB/sec
Jan 13 20:16:54.534302 kernel:    arm64_neon      : 27841 MB/sec
Jan 13 20:16:54.534319 kernel: xor: using function: arm64_neon (27841 MB/sec)
Jan 13 20:16:54.588496 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:54.607542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:54.614787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:54.635518 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 13 20:16:54.639177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:54.650156 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:54.669243 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jan 13 20:16:54.714524 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:54.728779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:54.788379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:54.799189 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:54.827651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:54.830426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:54.834127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:54.835656 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:54.843901 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:54.884854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:54.934763 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:16:54.961591 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU CD-ROM      2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:54.964478 kernel: scsi 0:0:0:1: Direct-Access     QEMU     QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:54.993642 kernel: ACPI: bus type USB registered
Jan 13 20:16:54.993703 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:16:54.994653 kernel: usbcore: registered new interface driver hub
Jan 13 20:16:54.994709 kernel: usbcore: registered new device driver usb
Jan 13 20:16:54.999608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:54.999752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:55.000851 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:55.001634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:55.001814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:55.003060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:55.012841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:55.019509 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 13 20:16:55.024411 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 13 20:16:55.024582 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:16:55.024594 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:16:55.034043 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:55.050088 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:16:55.050230 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:16:55.050320 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:55.050399 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:16:55.050523 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:16:55.050601 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:16:55.050705 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:16:55.050782 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 13 20:16:55.060686 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:16:55.060849 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 13 20:16:55.060943 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:16:55.061119 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 13 20:16:55.061216 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:16:55.061304 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 13 20:16:55.061394 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:16:55.061503 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:55.061514 kernel: GPT:17805311 != 80003071
Jan 13 20:16:55.061523 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:55.061532 kernel: GPT:17805311 != 80003071
Jan 13 20:16:55.061541 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:55.061550 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:55.061559 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 13 20:16:55.042774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:55.055853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:55.089028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:55.127904 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 13 20:16:55.146232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 13 20:16:55.153751 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (501)
Jan 13 20:16:55.155458 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (514)
Jan 13 20:16:55.169757 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:55.170470 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:55.186137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:16:55.196717 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:16:55.205358 disk-uuid[574]: Primary Header is updated.
Jan 13 20:16:55.205358 disk-uuid[574]: Secondary Entries is updated.
Jan 13 20:16:55.205358 disk-uuid[574]: Secondary Header is updated.
Jan 13 20:16:55.217488 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:55.225474 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:55.288532 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 13 20:16:55.534643 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 13 20:16:55.678536 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 13 20:16:55.679460 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 13 20:16:55.682474 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 13 20:16:55.734464 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 13 20:16:55.735549 kernel: usbcore: registered new interface driver usbhid
Jan 13 20:16:55.735575 kernel: usbhid: USB HID core driver
Jan 13 20:16:56.233017 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:56.233084 disk-uuid[575]: The operation has completed successfully.
Jan 13 20:16:56.313309 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:16:56.313444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:16:56.325777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:16:56.348988 sh[590]: Success
Jan 13 20:16:56.369629 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:16:56.462369 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:16:56.467265 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:16:56.477550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:16:56.509565 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:16:56.509648 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:56.509660 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:16:56.509672 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:16:56.509682 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:16:56.524489 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:16:56.526944 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:16:56.528138 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:16:56.539866 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:16:56.545864 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:16:56.561344 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:56.561417 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:56.561429 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:56.569612 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:56.569688 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:56.581497 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:56.581896 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:16:56.592060 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:16:56.599944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:16:56.710270 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:56.719257 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:56.736515 ignition[686]: Ignition 2.20.0
Jan 13 20:16:56.737113 ignition[686]: Stage: fetch-offline
Jan 13 20:16:56.737162 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:56.737172 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:56.737359 ignition[686]: parsed url from cmdline: ""
Jan 13 20:16:56.737363 ignition[686]: no config URL provided
Jan 13 20:16:56.737368 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:56.737376 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:56.737382 ignition[686]: failed to fetch config: resource requires networking
Jan 13 20:16:56.738289 ignition[686]: Ignition finished successfully
Jan 13 20:16:56.744619 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:56.749815 systemd-networkd[777]: lo: Link UP
Jan 13 20:16:56.749828 systemd-networkd[777]: lo: Gained carrier
Jan 13 20:16:56.751809 systemd-networkd[777]: Enumeration completed
Jan 13 20:16:56.752182 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:56.753079 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:56.753082 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:56.754900 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:56.757349 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:56.757353 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:56.758293 systemd-networkd[777]: eth0: Link UP
Jan 13 20:16:56.758297 systemd-networkd[777]: eth0: Gained carrier
Jan 13 20:16:56.758308 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:56.762710 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:16:56.764795 systemd-networkd[777]: eth1: Link UP
Jan 13 20:16:56.764798 systemd-networkd[777]: eth1: Gained carrier
Jan 13 20:16:56.764809 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:56.776820 ignition[781]: Ignition 2.20.0
Jan 13 20:16:56.776831 ignition[781]: Stage: fetch
Jan 13 20:16:56.777030 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:56.777041 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:56.777145 ignition[781]: parsed url from cmdline: ""
Jan 13 20:16:56.777149 ignition[781]: no config URL provided
Jan 13 20:16:56.777157 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:56.777166 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:56.777269 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 13 20:16:56.778120 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 20:16:56.801617 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:56.814994 systemd-networkd[777]: eth0: DHCPv4 address 138.199.153.200/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:16:56.978407 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 13 20:16:56.984846 ignition[781]: GET result: OK
Jan 13 20:16:56.985500 ignition[781]: parsing config with SHA512: 974f6661253e67cd1c0c3302d927a09c020f0c93bd4e96f795b1980b257b3f1de8356e69db73d3e4bf874857c4064308166ceb62525be29d1d39c213ed4a9751
Jan 13 20:16:56.991520 unknown[781]: fetched base config from "system"
Jan 13 20:16:56.992156 unknown[781]: fetched base config from "system"
Jan 13 20:16:56.992166 unknown[781]: fetched user config from "hetzner"
Jan 13 20:16:56.992864 ignition[781]: fetch: fetch complete
Jan 13 20:16:56.992870 ignition[781]: fetch: fetch passed
Jan 13 20:16:56.992948 ignition[781]: Ignition finished successfully
Jan 13 20:16:56.995117 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:57.000877 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:16:57.017726 ignition[789]: Ignition 2.20.0
Jan 13 20:16:57.017737 ignition[789]: Stage: kargs
Jan 13 20:16:57.017946 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:57.017956 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:57.018947 ignition[789]: kargs: kargs passed
Jan 13 20:16:57.019008 ignition[789]: Ignition finished successfully
Jan 13 20:16:57.023063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:57.031716 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:16:57.047855 ignition[795]: Ignition 2.20.0
Jan 13 20:16:57.047866 ignition[795]: Stage: disks
Jan 13 20:16:57.048062 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:57.048073 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:57.049215 ignition[795]: disks: disks passed
Jan 13 20:16:57.049279 ignition[795]: Ignition finished successfully
Jan 13 20:16:57.050389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:16:57.052205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:57.053173 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:16:57.054469 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:57.055587 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:57.056456 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:57.069145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:16:57.091624 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:16:57.098554 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:16:57.109638 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:16:57.159573 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:16:57.161219 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:16:57.163325 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:57.171682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:57.175296 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:16:57.177615 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:16:57.184309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:16:57.187632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:57.192134 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:16:57.199716 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (811)
Jan 13 20:16:57.205016 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:57.205093 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:57.205110 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:57.205821 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:16:57.218491 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:57.218575 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:57.221908 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:57.259674 coreos-metadata[813]: Jan 13 20:16:57.259 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 13 20:16:57.262585 coreos-metadata[813]: Jan 13 20:16:57.262 INFO Fetch successful
Jan 13 20:16:57.265834 coreos-metadata[813]: Jan 13 20:16:57.264 INFO wrote hostname ci-4186-1-0-a-dc4fc49980 to /sysroot/etc/hostname
Jan 13 20:16:57.266842 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:16:57.269740 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:57.274460 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:16:57.280163 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:16:57.288020 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:16:57.408290 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:57.415638 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:16:57.417932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:57.430543 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:57.462513 ignition[928]: INFO     : Ignition 2.20.0
Jan 13 20:16:57.462513 ignition[928]: INFO     : Stage: mount
Jan 13 20:16:57.462513 ignition[928]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:57.467965 ignition[928]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:57.467965 ignition[928]: INFO     : mount: mount passed
Jan 13 20:16:57.467965 ignition[928]: INFO     : Ignition finished successfully
Jan 13 20:16:57.467744 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:16:57.477749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:16:57.480696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:57.506654 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:16:57.513704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:57.543030 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939)
Jan 13 20:16:57.543097 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:57.543710 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:57.544446 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:57.548457 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:57.548523 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:57.551183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:57.572259 ignition[956]: INFO     : Ignition 2.20.0
Jan 13 20:16:57.572259 ignition[956]: INFO     : Stage: files
Jan 13 20:16:57.575550 ignition[956]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:57.575550 ignition[956]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:57.575550 ignition[956]: DEBUG    : files: compiled without relabeling support, skipping
Jan 13 20:16:57.579295 ignition[956]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 20:16:57.580322 ignition[956]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:16:57.584682 ignition[956]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:16:57.585934 ignition[956]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 20:16:57.587580 ignition[956]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:16:57.586657 unknown[956]: wrote ssh authorized keys file for user: core
Jan 13 20:16:57.589537 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:57.590629 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:57.642473 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:16:57.746269 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:57.748059 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:57.761662 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:16:57.898555 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:16:58.034755 systemd-networkd[777]: eth0: Gained IPv6LL
Jan 13 20:16:58.284017 ignition[956]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:16:58.284017 ignition[956]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(f): [started]  setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: createResultFile: createFiles: op(10): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:58.289263 ignition[956]: INFO     : files: files passed
Jan 13 20:16:58.289263 ignition[956]: INFO     : Ignition finished successfully
Jan 13 20:16:58.292790 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:16:58.304690 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:16:58.308237 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:16:58.312504 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:16:58.313144 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:16:58.321526 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:58.322908 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:58.324145 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:58.327487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:58.328381 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:16:58.334691 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:16:58.372492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:16:58.372629 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:16:58.374994 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:58.376399 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:16:58.378099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:16:58.389868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:16:58.408755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:58.415749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:16:58.429903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:58.431774 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:58.432922 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:16:58.434064 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:16:58.434217 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:58.435853 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:16:58.436619 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:16:58.437990 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:16:58.439630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:58.440900 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:58.442113 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:16:58.443404 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:58.444913 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:16:58.446097 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:16:58.447301 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:16:58.448285 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:16:58.448457 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:58.449855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:58.451037 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:58.452352 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:16:58.455652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:58.456628 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:16:58.456763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:58.459232 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:16:58.459394 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:58.460997 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:16:58.461158 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:16:58.462771 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:16:58.462961 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:58.473107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:16:58.473945 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:16:58.474272 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:58.480886 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:58.481602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:16:58.481871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:58.485099 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:16:58.485406 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:58.499465 ignition[1008]: INFO     : Ignition 2.20.0
Jan 13 20:16:58.499465 ignition[1008]: INFO     : Stage: umount
Jan 13 20:16:58.499465 ignition[1008]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:58.499465 ignition[1008]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:58.507035 ignition[1008]: INFO     : umount: umount passed
Jan 13 20:16:58.507035 ignition[1008]: INFO     : Ignition finished successfully
Jan 13 20:16:58.501906 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:16:58.502058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:16:58.503587 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:16:58.503776 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:16:58.505722 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:16:58.505842 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:16:58.507722 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:16:58.507792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:58.508879 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:16:58.508937 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:58.509839 systemd[1]: Stopped target network.target - Network.
Jan 13 20:16:58.510540 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:16:58.510648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:58.511751 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:16:58.512238 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:16:58.512791 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:58.513687 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:16:58.516695 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:16:58.517614 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:16:58.517733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:58.518622 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:16:58.518703 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:58.519517 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:16:58.519583 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:16:58.520729 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:16:58.520819 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:58.524720 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:16:58.525499 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:58.530129 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:16:58.533615 systemd-networkd[777]: eth0: DHCPv6 lease lost
Jan 13 20:16:58.537537 systemd-networkd[777]: eth1: DHCPv6 lease lost
Jan 13 20:16:58.538881 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:16:58.539067 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:58.542890 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:16:58.543621 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:16:58.545282 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:16:58.545411 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:58.547858 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:16:58.547936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:58.548687 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:16:58.548745 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:58.556645 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:16:58.557123 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:16:58.557200 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:58.560862 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:16:58.560944 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:58.561918 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:16:58.561961 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:58.563078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:16:58.563123 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:58.564372 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:58.578837 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:16:58.578955 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:16:58.590934 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:16:58.591231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:58.593369 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:16:58.593564 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:58.594480 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:16:58.594514 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:58.595772 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:16:58.595820 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:58.598015 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:16:58.598065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:58.599736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:58.599789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:58.608656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:16:58.609283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:16:58.609360 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:58.613294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:58.613369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:58.616971 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:16:58.617129 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:16:58.618167 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:16:58.627000 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:16:58.638132 systemd[1]: Switching root.
Jan 13 20:16:58.669089 systemd-journald[237]: Journal stopped
Jan 13 20:16:59.788796 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:16:59.788885 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 20:16:59.788899 kernel: SELinux:  policy capability open_perms=1
Jan 13 20:16:59.788913 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 20:16:59.788924 kernel: SELinux:  policy capability always_check_network=0
Jan 13 20:16:59.788934 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 20:16:59.788944 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 20:16:59.788959 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 20:16:59.788973 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 20:16:59.788983 systemd[1]: Successfully loaded SELinux policy in 55.093ms.
Jan 13 20:16:59.788999 kernel: audit: type=1403 audit(1736799418.894:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:16:59.789018 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.877ms.
Jan 13 20:16:59.789029 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:59.789039 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:59.789050 systemd[1]: Detected architecture arm64.
Jan 13 20:16:59.789061 systemd[1]: Detected first boot.
Jan 13 20:16:59.789072 systemd[1]: Hostname set to <ci-4186-1-0-a-dc4fc49980>.
Jan 13 20:16:59.789082 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:59.789093 zram_generator::config[1050]: No configuration found.
Jan 13 20:16:59.789104 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:16:59.789114 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:16:59.789125 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:16:59.789135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:16:59.789146 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:16:59.789158 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:16:59.789186 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:16:59.789199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:16:59.789209 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:16:59.789220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:16:59.789231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:16:59.789243 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:16:59.789254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:59.789271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:59.789281 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:16:59.789296 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:16:59.789307 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:16:59.789318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:59.789328 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:16:59.789338 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:59.789349 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:16:59.789366 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:59.789376 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:59.789387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:16:59.789397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:59.789407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:59.789419 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:59.789429 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:59.789580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:16:59.789597 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:16:59.789607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:59.789618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:59.789628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:59.789639 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:16:59.789649 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:16:59.789659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:16:59.789669 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:16:59.789679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:16:59.789691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:16:59.789702 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:16:59.789713 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:16:59.789723 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:16:59.789734 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:16:59.789744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:16:59.789755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:59.789765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:16:59.789777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:16:59.789791 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:16:59.789803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:16:59.789814 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:16:59.789824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:16:59.789835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:16:59.789848 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:16:59.789858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:16:59.789869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:16:59.789879 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:16:59.789890 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:59.789900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:59.789910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:16:59.789921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:16:59.789933 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:59.789943 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:16:59.789954 systemd[1]: Stopped verity-setup.service.
Jan 13 20:16:59.789964 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:16:59.789974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:16:59.789986 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:16:59.789997 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:16:59.790007 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:16:59.790017 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:16:59.790027 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:16:59.790038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:59.790048 kernel: ACPI: bus type drm_connector registered
Jan 13 20:16:59.790059 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:16:59.790070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:16:59.790081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:16:59.790092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:16:59.790101 kernel: fuse: init (API version 7.39)
Jan 13 20:16:59.790111 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:16:59.790122 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:16:59.790132 kernel: loop: module loaded
Jan 13 20:16:59.790143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:16:59.790154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:16:59.790165 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:16:59.790220 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:16:59.790233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:16:59.790278 systemd-journald[1124]: Collecting audit messages is disabled.
Jan 13 20:16:59.790301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:16:59.790314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:59.790326 systemd-journald[1124]: Journal started
Jan 13 20:16:59.790354 systemd-journald[1124]: Runtime Journal (/run/log/journal/a65cd1ee9c0943c98a87f1522aec6f5b) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:59.481565 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:16:59.508314 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 20:16:59.508797 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:16:59.791852 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:16:59.794529 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:59.794249 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:16:59.808166 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:16:59.814642 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:16:59.823611 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:16:59.825567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:16:59.825636 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:59.828489 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:16:59.837220 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:16:59.845705 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:16:59.847162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:16:59.851121 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:16:59.861717 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:16:59.863301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:16:59.865839 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:16:59.867257 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:16:59.880649 systemd-journald[1124]: Time spent on flushing to /var/log/journal/a65cd1ee9c0943c98a87f1522aec6f5b is 49.177ms for 1117 entries.
Jan 13 20:16:59.880649 systemd-journald[1124]: System Journal (/var/log/journal/a65cd1ee9c0943c98a87f1522aec6f5b) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:16:59.959640 systemd-journald[1124]: Received client request to flush runtime journal.
Jan 13 20:16:59.959701 kernel: loop0: detected capacity change from 0 to 189592
Jan 13 20:16:59.878050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:59.882401 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:16:59.886102 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:16:59.891479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:59.894506 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:16:59.895453 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:16:59.898483 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:16:59.917752 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:16:59.935051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:16:59.936155 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:16:59.949602 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:16:59.951859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:59.963506 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:16:59.977707 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:16:59.990476 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:16:59.999659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:17:00.004407 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:17:00.026821 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:17:00.029640 kernel: loop1: detected capacity change from 0 to 113552
Jan 13 20:17:00.037159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:17:00.071465 kernel: loop2: detected capacity change from 0 to 8
Jan 13 20:17:00.093423 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 13 20:17:00.093461 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 13 20:17:00.099601 kernel: loop3: detected capacity change from 0 to 116784
Jan 13 20:17:00.102490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:17:00.152723 kernel: loop4: detected capacity change from 0 to 189592
Jan 13 20:17:00.203711 kernel: loop5: detected capacity change from 0 to 113552
Jan 13 20:17:00.222818 kernel: loop6: detected capacity change from 0 to 8
Jan 13 20:17:00.226469 kernel: loop7: detected capacity change from 0 to 116784
Jan 13 20:17:00.250108 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 13 20:17:00.251665 (sd-merge)[1189]: Merged extensions into '/usr'.
Jan 13 20:17:00.259928 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:17:00.259949 systemd[1]: Reloading...
Jan 13 20:17:00.367604 zram_generator::config[1214]: No configuration found.
Jan 13 20:17:00.599882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:17:00.682977 systemd[1]: Reloading finished in 421 ms.
Jan 13 20:17:00.689342 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:17:00.719581 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:17:00.723807 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:17:00.739339 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:17:00.744257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:17:00.746266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:17:00.757870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:17:00.764522 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:17:00.764714 systemd[1]: Reloading...
Jan 13 20:17:00.801400 systemd-udevd[1255]: Using default interface naming scheme 'v255'.
Jan 13 20:17:00.804599 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:17:00.805361 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:17:00.806272 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:17:00.807499 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 13 20:17:00.807584 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 13 20:17:00.813109 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:17:00.813123 systemd-tmpfiles[1253]: Skipping /boot
Jan 13 20:17:00.835698 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:17:00.835711 systemd-tmpfiles[1253]: Skipping /boot
Jan 13 20:17:00.884465 zram_generator::config[1285]: No configuration found.
Jan 13 20:17:01.071407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:17:01.141021 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:17:01.141148 systemd[1]: Reloading finished in 375 ms.
Jan 13 20:17:01.153686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:17:01.155080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:17:01.199475 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:17:01.202957 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:17:01.208908 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:17:01.209797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:01.212430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:01.217120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:17:01.220463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:17:01.221914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:01.224783 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:17:01.230566 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:17:01.234848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:17:01.239911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:17:01.245713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:01.245921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:01.248042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:01.268921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:17:01.269740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:01.277320 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:17:01.301677 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:17:01.305406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:01.306632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:01.322013 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:17:01.365200 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 13 20:17:01.365323 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1290)
Jan 13 20:17:01.365353 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:17:01.365379 kernel: [drm] features: -context_init
Jan 13 20:17:01.368117 kernel: [drm] number of scanouts: 1
Jan 13 20:17:01.368250 kernel: [drm] number of cap sets: 0
Jan 13 20:17:01.366818 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 13 20:17:01.366969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:01.374735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:01.378463 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 13 20:17:01.378643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:01.386414 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:17:01.388149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:17:01.392486 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:17:01.396014 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:17:01.396272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:17:01.430347 augenrules[1393]: No rules
Jan 13 20:17:01.463642 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:17:01.468156 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:17:01.468683 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:17:01.469987 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:17:01.475859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:17:01.476067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:17:01.478647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:17:01.481530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:17:01.485813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:01.486020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:01.525714 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:17:01.532345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:17:01.540687 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:17:01.542612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:17:01.542706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:17:01.545329 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:17:01.551611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:17:01.552329 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:17:01.585407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:17:01.613932 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:17:01.648040 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:17:01.661068 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:17:01.682382 lvm[1424]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:17:01.698912 systemd-networkd[1369]: lo: Link UP
Jan 13 20:17:01.700025 systemd-networkd[1369]: lo: Gained carrier
Jan 13 20:17:01.703991 systemd-networkd[1369]: Enumeration completed
Jan 13 20:17:01.704592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:17:01.705617 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:01.705626 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:17:01.707540 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:01.707548 systemd-networkd[1369]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:17:01.709999 systemd-networkd[1369]: eth0: Link UP
Jan 13 20:17:01.710014 systemd-networkd[1369]: eth0: Gained carrier
Jan 13 20:17:01.710040 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:01.714188 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:17:01.717224 systemd-networkd[1369]: eth1: Link UP
Jan 13 20:17:01.717239 systemd-networkd[1369]: eth1: Gained carrier
Jan 13 20:17:01.717266 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:01.718681 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:17:01.720191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:17:01.735784 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:17:01.737034 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:17:01.739720 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:17:01.746582 systemd-resolved[1370]: Positive Trust Anchors:
Jan 13 20:17:01.746607 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:17:01.746639 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:17:01.760565 systemd-resolved[1370]: Using system hostname 'ci-4186-1-0-a-dc4fc49980'.
Jan 13 20:17:01.761080 lvm[1427]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:17:01.764734 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:17:01.765591 systemd-networkd[1369]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:17:01.765592 systemd[1]: Reached target network.target - Network.
Jan 13 20:17:01.766150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:17:01.768065 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jan 13 20:17:01.773557 systemd-networkd[1369]: eth0: DHCPv4 address 138.199.153.200/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:17:01.787613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:01.789270 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:17:01.790891 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:17:01.791940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:17:01.793285 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:17:01.794261 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:17:01.795341 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:17:01.796387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:17:01.796824 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:17:01.797807 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:17:01.801686 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:17:01.804328 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:17:01.812096 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:17:01.814192 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:17:01.815590 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:17:01.819445 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:17:01.820363 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:17:01.821408 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:17:01.821866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:17:01.827691 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:17:01.836244 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:17:01.840816 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:17:01.846007 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:17:01.850743 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:17:01.853078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:17:01.857734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:17:01.864709 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:17:01.872489 jq[1438]: false
Jan 13 20:17:01.875428 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 13 20:17:01.890804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:17:01.896066 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:17:01.904996 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:17:01.908015 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:17:01.908759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:17:01.912580 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:17:01.917531 coreos-metadata[1436]: Jan 13 20:17:01.915 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 13 20:17:01.917661 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:17:01.926409 coreos-metadata[1436]: Jan 13 20:17:01.921 INFO Fetch successful
Jan 13 20:17:01.926409 coreos-metadata[1436]: Jan 13 20:17:01.921 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 13 20:17:01.926409 coreos-metadata[1436]: Jan 13 20:17:01.922 INFO Fetch successful
Jan 13 20:17:01.926919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:17:01.927196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:17:01.929320 systemd-timesyncd[1386]: Contacted time server 129.70.132.37:123 (1.flatcar.pool.ntp.org).
Jan 13 20:17:01.929396 systemd-timesyncd[1386]: Initial clock synchronization to Mon 2025-01-13 20:17:02.044585 UTC.
Jan 13 20:17:01.938351 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:17:01.938627 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:17:01.979845 jq[1450]: true
Jan 13 20:17:01.993289 extend-filesystems[1440]: Found loop4
Jan 13 20:17:01.994408 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:17:01.995305 extend-filesystems[1440]: Found loop5
Jan 13 20:17:01.995305 extend-filesystems[1440]: Found loop6
Jan 13 20:17:01.995305 extend-filesystems[1440]: Found loop7
Jan 13 20:17:01.995305 extend-filesystems[1440]: Found sda
Jan 13 20:17:01.995305 extend-filesystems[1440]: Found sda1
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda2
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda3
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found usr
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda4
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda6
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda7
Jan 13 20:17:02.016320 extend-filesystems[1440]: Found sda9
Jan 13 20:17:02.016320 extend-filesystems[1440]: Checking size of /dev/sda9
Jan 13 20:17:01.995631 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:17:02.037921 tar[1452]: linux-arm64/helm
Jan 13 20:17:02.048238 dbus-daemon[1437]: [system] SELinux support is enabled
Jan 13 20:17:02.051283 extend-filesystems[1440]: Resized partition /dev/sda9
Jan 13 20:17:02.048778 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:17:02.053625 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:17:02.053666 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:17:02.056955 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:17:02.059975 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:17:02.084641 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 13 20:17:02.057017 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:17:02.083067 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:17:02.096277 jq[1469]: true
Jan 13 20:17:02.111168 update_engine[1448]: I20250113 20:17:02.110970  1448 main.cc:92] Flatcar Update Engine starting
Jan 13 20:17:02.129513 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:17:02.139472 update_engine[1448]: I20250113 20:17:02.130901  1448 update_check_scheduler.cc:74] Next update check in 11m36s
Jan 13 20:17:02.146799 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:17:02.165882 systemd-logind[1447]: New seat seat0.
Jan 13 20:17:02.174183 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:17:02.174215 systemd-logind[1447]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 13 20:17:02.174571 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:17:02.269006 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:17:02.271644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:17:02.307956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1335)
Jan 13 20:17:02.367679 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:17:02.376865 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:17:02.402593 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 13 20:17:02.405059 systemd[1]: Starting sshkeys.service...
Jan 13 20:17:02.439080 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:17:02.456299 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:17:02.468421 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:17:02.471802 extend-filesystems[1481]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 13 20:17:02.471802 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 13 20:17:02.471802 extend-filesystems[1481]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 13 20:17:02.486989 extend-filesystems[1440]: Resized filesystem in /dev/sda9
Jan 13 20:17:02.486989 extend-filesystems[1440]: Found sr0
Jan 13 20:17:02.473632 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:17:02.473860 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:17:02.576219 coreos-metadata[1518]: Jan 13 20:17:02.575 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 13 20:17:02.579309 coreos-metadata[1518]: Jan 13 20:17:02.578 INFO Fetch successful
Jan 13 20:17:02.588632 unknown[1518]: wrote ssh authorized keys file for user: core
Jan 13 20:17:02.635059 containerd[1475]: time="2025-01-13T20:17:02.634904726Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:17:02.651498 update-ssh-keys[1523]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:17:02.653632 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:17:02.662095 systemd[1]: Finished sshkeys.service.
Jan 13 20:17:02.718597 containerd[1475]: time="2025-01-13T20:17:02.718408540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.722901 containerd[1475]: time="2025-01-13T20:17:02.722833673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:02.722901 containerd[1475]: time="2025-01-13T20:17:02.722886505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:17:02.722901 containerd[1475]: time="2025-01-13T20:17:02.722908011Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:17:02.723192 containerd[1475]: time="2025-01-13T20:17:02.723104769Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:17:02.723192 containerd[1475]: time="2025-01-13T20:17:02.723122745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.723238 containerd[1475]: time="2025-01-13T20:17:02.723206617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:02.723238 containerd[1475]: time="2025-01-13T20:17:02.723221509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.724024 containerd[1475]: time="2025-01-13T20:17:02.723413033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:02.725621 containerd[1475]: time="2025-01-13T20:17:02.725420665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.725621 containerd[1475]: time="2025-01-13T20:17:02.725531684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:02.725621 containerd[1475]: time="2025-01-13T20:17:02.725547469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.725797 containerd[1475]: time="2025-01-13T20:17:02.725734691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.726251 containerd[1475]: time="2025-01-13T20:17:02.726210457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:02.726455 containerd[1475]: time="2025-01-13T20:17:02.726427017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:02.726581 containerd[1475]: time="2025-01-13T20:17:02.726557635Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:17:02.726746 containerd[1475]: time="2025-01-13T20:17:02.726700750Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:17:02.727000 containerd[1475]: time="2025-01-13T20:17:02.726934474Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:17:02.741310 containerd[1475]: time="2025-01-13T20:17:02.741150938Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:17:02.741310 containerd[1475]: time="2025-01-13T20:17:02.741245198Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:17:02.741310 containerd[1475]: time="2025-01-13T20:17:02.741299085Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:17:02.741310 containerd[1475]: time="2025-01-13T20:17:02.741320469Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:17:02.741578 containerd[1475]: time="2025-01-13T20:17:02.741338404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:17:02.741621 containerd[1475]: time="2025-01-13T20:17:02.741575212Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742039170Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742206672Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742224486Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742241001Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742257962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742273219Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742298052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742334652Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742353115Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742370076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.742387 containerd[1475]: time="2025-01-13T20:17:02.742400753Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742572231Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742643890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742660405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742685320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742703173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742730401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742744684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742758683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742774224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742788832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742812407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742824621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742839188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742853471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743088 containerd[1475]: time="2025-01-13T20:17:02.742868606Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.742898958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.742917826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.742932596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743118277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743137916Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743149643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743161289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743169972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743182876Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743193953Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:17:02.743421 containerd[1475]: time="2025-01-13T20:17:02.743205761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:17:02.747807 containerd[1475]: time="2025-01-13T20:17:02.746832703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:17:02.747807 containerd[1475]: time="2025-01-13T20:17:02.746911909Z" level=info msg="Connect containerd service"
Jan 13 20:17:02.747807 containerd[1475]: time="2025-01-13T20:17:02.746969772Z" level=info msg="using legacy CRI server"
Jan 13 20:17:02.747807 containerd[1475]: time="2025-01-13T20:17:02.746978496Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:17:02.747807 containerd[1475]: time="2025-01-13T20:17:02.747286476Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:17:02.748532 containerd[1475]: time="2025-01-13T20:17:02.748189844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:17:02.751157 containerd[1475]: time="2025-01-13T20:17:02.750858842Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:17:02.751157 containerd[1475]: time="2025-01-13T20:17:02.750934112Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:17:02.751157 containerd[1475]: time="2025-01-13T20:17:02.751090212Z" level=info msg="Start subscribing containerd event"
Jan 13 20:17:02.751157 containerd[1475]: time="2025-01-13T20:17:02.751132737Z" level=info msg="Start recovering state"
Jan 13 20:17:02.751414 containerd[1475]: time="2025-01-13T20:17:02.751213323Z" level=info msg="Start event monitor"
Jan 13 20:17:02.751414 containerd[1475]: time="2025-01-13T20:17:02.751226835Z" level=info msg="Start snapshots syncer"
Jan 13 20:17:02.751414 containerd[1475]: time="2025-01-13T20:17:02.751237142Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:17:02.751414 containerd[1475]: time="2025-01-13T20:17:02.751248138Z" level=info msg="Start streaming server"
Jan 13 20:17:02.751414 containerd[1475]: time="2025-01-13T20:17:02.751386750Z" level=info msg="containerd successfully booted in 0.122084s"
Jan 13 20:17:02.751678 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:17:02.962708 systemd-networkd[1369]: eth0: Gained IPv6LL
Jan 13 20:17:02.969077 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:17:02.971801 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:17:02.981985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:02.992050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:17:03.000053 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:17:03.007099 tar[1452]: linux-arm64/LICENSE
Jan 13 20:17:03.007099 tar[1452]: linux-arm64/README.md
Jan 13 20:17:03.046728 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:17:03.052433 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:17:03.057200 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:17:03.068651 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:17:03.088880 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:17:03.089922 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:17:03.101148 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:17:03.116654 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:17:03.127444 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:17:03.135740 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:17:03.137361 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:17:03.154675 systemd-networkd[1369]: eth1: Gained IPv6LL
Jan 13 20:17:03.971253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:03.973591 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:17:03.980621 systemd[1]: Startup finished in 912ms (kernel) + 5.172s (initrd) + 5.141s (userspace) = 11.226s.
Jan 13 20:17:03.983196 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:03.995265 agetty[1560]: failed to open credentials directory
Jan 13 20:17:03.995266 agetty[1559]: failed to open credentials directory
Jan 13 20:17:04.709335 kubelet[1566]: E0113 20:17:04.708989    1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:04.713689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:04.714336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:14.964059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:17:14.972807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:15.103797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:15.116947 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:15.171697 kubelet[1585]: E0113 20:17:15.171615    1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:15.176693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:15.177045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:25.428575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:17:25.438010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:25.560252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:25.572229 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:25.627096 kubelet[1600]: E0113 20:17:25.627032    1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:25.630032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:25.630236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:35.881866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:17:35.887752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:36.036963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:36.042274 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:36.081020 kubelet[1615]: E0113 20:17:36.080933    1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:36.083198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:36.083380 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:46.231861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:17:46.239926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:46.369846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:46.381150 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:46.434378 kubelet[1630]: E0113 20:17:46.434293    1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:46.438622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:46.438880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:47.787584 update_engine[1448]: I20250113 20:17:47.787390  1448 update_attempter.cc:509] Updating boot flags...
Jan 13 20:17:47.853013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1647)
Jan 13 20:17:56.481854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 13 20:17:56.488765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:56.630410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:56.636283 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:56.693297 kubelet[1660]: E0113 20:17:56.693244    1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:56.696088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:56.696267 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:06.732000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 13 20:18:06.746136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:06.880591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:06.886490 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:06.936906 kubelet[1676]: E0113 20:18:06.936771    1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:06.938692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:06.938879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:16.982040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 13 20:18:16.999743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:17.131792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:17.137792 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:17.182690 kubelet[1691]: E0113 20:18:17.182632    1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:17.185901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:17.186060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:27.232046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 13 20:18:27.240861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:27.390872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:27.404124 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:27.449399 kubelet[1707]: E0113 20:18:27.449323    1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:27.453410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:27.453930 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:37.482196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 13 20:18:37.499857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:37.656796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:37.669469 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:37.722631 kubelet[1722]: E0113 20:18:37.722513    1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:37.725731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:37.725950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:46.365724 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:18:46.373839 systemd[1]: Started sshd@0-138.199.153.200:22-139.178.89.65:48230.service - OpenSSH per-connection server daemon (139.178.89.65:48230).
Jan 13 20:18:47.374005 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 48230 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:47.377820 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:47.391220 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:18:47.406779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:18:47.413176 systemd-logind[1447]: New session 1 of user core.
Jan 13 20:18:47.423947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:18:47.431862 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:18:47.446080 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:18:47.567248 systemd[1734]: Queued start job for default target default.target.
Jan 13 20:18:47.581194 systemd[1734]: Created slice app.slice - User Application Slice.
Jan 13 20:18:47.581226 systemd[1734]: Reached target paths.target - Paths.
Jan 13 20:18:47.581240 systemd[1734]: Reached target timers.target - Timers.
Jan 13 20:18:47.582853 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:18:47.598051 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:18:47.599419 systemd[1734]: Reached target sockets.target - Sockets.
Jan 13 20:18:47.599473 systemd[1734]: Reached target basic.target - Basic System.
Jan 13 20:18:47.599558 systemd[1734]: Reached target default.target - Main User Target.
Jan 13 20:18:47.599613 systemd[1734]: Startup finished in 144ms.
Jan 13 20:18:47.599776 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:18:47.613839 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:18:47.731815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 13 20:18:47.746798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:47.866983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:47.873418 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:47.919512 kubelet[1751]: E0113 20:18:47.919394    1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:47.921721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:47.921907 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:48.311883 systemd[1]: Started sshd@1-138.199.153.200:22-139.178.89.65:48240.service - OpenSSH per-connection server daemon (139.178.89.65:48240).
Jan 13 20:18:49.302215 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 48240 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:49.305265 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:49.318075 systemd-logind[1447]: New session 2 of user core.
Jan 13 20:18:49.329479 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:18:49.985858 sshd[1763]: Connection closed by 139.178.89.65 port 48240
Jan 13 20:18:49.986740 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:49.991729 systemd[1]: sshd@1-138.199.153.200:22-139.178.89.65:48240.service: Deactivated successfully.
Jan 13 20:18:49.994066 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:18:49.999553 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:18:50.002160 systemd-logind[1447]: Removed session 2.
Jan 13 20:18:50.171947 systemd[1]: Started sshd@2-138.199.153.200:22-139.178.89.65:48250.service - OpenSSH per-connection server daemon (139.178.89.65:48250).
Jan 13 20:18:51.155596 sshd[1768]: Accepted publickey for core from 139.178.89.65 port 48250 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:51.158914 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:51.167520 systemd-logind[1447]: New session 3 of user core.
Jan 13 20:18:51.175841 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:18:51.827418 sshd[1770]: Connection closed by 139.178.89.65 port 48250
Jan 13 20:18:51.827291 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:51.830736 systemd[1]: sshd@2-138.199.153.200:22-139.178.89.65:48250.service: Deactivated successfully.
Jan 13 20:18:51.832842 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:18:51.835082 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:18:51.836954 systemd-logind[1447]: Removed session 3.
Jan 13 20:18:52.006919 systemd[1]: Started sshd@3-138.199.153.200:22-139.178.89.65:57126.service - OpenSSH per-connection server daemon (139.178.89.65:57126).
Jan 13 20:18:53.000116 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 57126 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:53.001710 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:53.009246 systemd-logind[1447]: New session 4 of user core.
Jan 13 20:18:53.018889 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:18:53.683891 sshd[1777]: Connection closed by 139.178.89.65 port 57126
Jan 13 20:18:53.685792 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:53.691373 systemd[1]: sshd@3-138.199.153.200:22-139.178.89.65:57126.service: Deactivated successfully.
Jan 13 20:18:53.694534 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:18:53.698471 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:18:53.700910 systemd-logind[1447]: Removed session 4.
Jan 13 20:18:53.859145 systemd[1]: Started sshd@4-138.199.153.200:22-139.178.89.65:57134.service - OpenSSH per-connection server daemon (139.178.89.65:57134).
Jan 13 20:18:54.852508 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 57134 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:18:54.851993 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:54.861668 systemd-logind[1447]: New session 5 of user core.
Jan 13 20:18:54.868912 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:18:55.385692 sudo[1785]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:18:55.386046 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:18:55.764989 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:18:55.766173 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:18:56.040492 dockerd[1803]: time="2025-01-13T20:18:56.039542344Z" level=info msg="Starting up"
Jan 13 20:18:56.124115 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3305124117-merged.mount: Deactivated successfully.
Jan 13 20:18:56.158147 dockerd[1803]: time="2025-01-13T20:18:56.157659944Z" level=info msg="Loading containers: start."
Jan 13 20:18:56.365483 kernel: Initializing XFRM netlink socket
Jan 13 20:18:56.476935 systemd-networkd[1369]: docker0: Link UP
Jan 13 20:18:56.508597 dockerd[1803]: time="2025-01-13T20:18:56.508387019Z" level=info msg="Loading containers: done."
Jan 13 20:18:56.531181 dockerd[1803]: time="2025-01-13T20:18:56.530468249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:18:56.531181 dockerd[1803]: time="2025-01-13T20:18:56.530601449Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 13 20:18:56.531181 dockerd[1803]: time="2025-01-13T20:18:56.530837050Z" level=info msg="Daemon has completed initialization"
Jan 13 20:18:56.582598 dockerd[1803]: time="2025-01-13T20:18:56.582494600Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:18:56.583643 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:18:57.755190 containerd[1475]: time="2025-01-13T20:18:57.755136709Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 20:18:57.981757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 13 20:18:57.993182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:18:58.124914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:18:58.131314 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:18:58.182645 kubelet[1998]: E0113 20:18:58.182075    1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:18:58.187670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:18:58.189053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:18:58.441003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784857089.mount: Deactivated successfully.
Jan 13 20:18:59.417209 containerd[1475]: time="2025-01-13T20:18:59.417147854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:18:59.421307 containerd[1475]: time="2025-01-13T20:18:59.421218183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615677"
Jan 13 20:18:59.423921 containerd[1475]: time="2025-01-13T20:18:59.423865069Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:18:59.432988 containerd[1475]: time="2025-01-13T20:18:59.432908450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:18:59.434975 containerd[1475]: time="2025-01-13T20:18:59.434926694Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.679731465s"
Jan 13 20:18:59.435162 containerd[1475]: time="2025-01-13T20:18:59.435145415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
Jan 13 20:18:59.436667 containerd[1475]: time="2025-01-13T20:18:59.436633618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 20:19:00.964428 containerd[1475]: time="2025-01-13T20:19:00.964377905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:00.968291 containerd[1475]: time="2025-01-13T20:19:00.968209635Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470116"
Jan 13 20:19:00.970531 containerd[1475]: time="2025-01-13T20:19:00.970482561Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:00.974980 containerd[1475]: time="2025-01-13T20:19:00.974921332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:00.976455 containerd[1475]: time="2025-01-13T20:19:00.975894894Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.539013595s"
Jan 13 20:19:00.976610 containerd[1475]: time="2025-01-13T20:19:00.976592856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Jan 13 20:19:00.977688 containerd[1475]: time="2025-01-13T20:19:00.977664899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 20:19:02.220527 containerd[1475]: time="2025-01-13T20:19:02.219273534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:02.222751 containerd[1475]: time="2025-01-13T20:19:02.222635025Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024222"
Jan 13 20:19:02.224415 containerd[1475]: time="2025-01-13T20:19:02.224320830Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:02.229832 containerd[1475]: time="2025-01-13T20:19:02.229758727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:02.232379 containerd[1475]: time="2025-01-13T20:19:02.232092374Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.254121554s"
Jan 13 20:19:02.232379 containerd[1475]: time="2025-01-13T20:19:02.232368975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Jan 13 20:19:02.233642 containerd[1475]: time="2025-01-13T20:19:02.233297618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:19:03.739832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705065897.mount: Deactivated successfully.
Jan 13 20:19:04.177369 containerd[1475]: time="2025-01-13T20:19:04.177167289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:04.181504 containerd[1475]: time="2025-01-13T20:19:04.181346984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452"
Jan 13 20:19:04.183842 containerd[1475]: time="2025-01-13T20:19:04.183744153Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:04.187471 containerd[1475]: time="2025-01-13T20:19:04.187074205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:04.188671 containerd[1475]: time="2025-01-13T20:19:04.188139849Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.954788031s"
Jan 13 20:19:04.188671 containerd[1475]: time="2025-01-13T20:19:04.188187489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Jan 13 20:19:04.189295 containerd[1475]: time="2025-01-13T20:19:04.189259613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:19:04.820451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952570254.mount: Deactivated successfully.
Jan 13 20:19:05.738282 containerd[1475]: time="2025-01-13T20:19:05.736758555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:05.742089 containerd[1475]: time="2025-01-13T20:19:05.741960015Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 13 20:19:05.745354 containerd[1475]: time="2025-01-13T20:19:05.744605145Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:05.752474 containerd[1475]: time="2025-01-13T20:19:05.750998410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:05.753293 containerd[1475]: time="2025-01-13T20:19:05.753229379Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.563831486s"
Jan 13 20:19:05.753509 containerd[1475]: time="2025-01-13T20:19:05.753424620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:19:05.754647 containerd[1475]: time="2025-01-13T20:19:05.754610864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 20:19:06.320176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440205437.mount: Deactivated successfully.
Jan 13 20:19:06.329587 containerd[1475]: time="2025-01-13T20:19:06.329534059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.331740 containerd[1475]: time="2025-01-13T20:19:06.331516387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 13 20:19:06.336861 containerd[1475]: time="2025-01-13T20:19:06.336775968Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.339176 containerd[1475]: time="2025-01-13T20:19:06.339100738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:06.340307 containerd[1475]: time="2025-01-13T20:19:06.340123702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 585.294797ms"
Jan 13 20:19:06.340307 containerd[1475]: time="2025-01-13T20:19:06.340171462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 13 20:19:06.341358 containerd[1475]: time="2025-01-13T20:19:06.341270067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 20:19:07.042682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537047071.mount: Deactivated successfully.
Jan 13 20:19:08.232258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 13 20:19:08.241754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:08.418002 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:19:08.418693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:08.476538 kubelet[2184]: E0113 20:19:08.476385    2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:19:08.479973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:19:08.480343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:19:08.741139 containerd[1475]: time="2025-01-13T20:19:08.740829591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.744226 containerd[1475]: time="2025-01-13T20:19:08.743651444Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487"
Jan 13 20:19:08.746351 containerd[1475]: time="2025-01-13T20:19:08.746287136Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.753485 containerd[1475]: time="2025-01-13T20:19:08.752462125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:08.754166 containerd[1475]: time="2025-01-13T20:19:08.754122652Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.412789985s"
Jan 13 20:19:08.754324 containerd[1475]: time="2025-01-13T20:19:08.754305973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 13 20:19:15.228852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:15.242970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:15.298729 systemd[1]: Reloading requested from client PID 2221 ('systemctl') (unit session-5.scope)...
Jan 13 20:19:15.298748 systemd[1]: Reloading...
Jan 13 20:19:15.445018 zram_generator::config[2261]: No configuration found.
Jan 13 20:19:15.540329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:19:15.608427 systemd[1]: Reloading finished in 309 ms.
Jan 13 20:19:15.677536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:15.681554 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:19:15.681875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:15.689163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:15.833201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:15.836557 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:19:15.894477 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:15.894477 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:19:15.894477 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:15.895026 kubelet[2311]: I0113 20:19:15.894641    2311 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:19:17.749194 kubelet[2311]: I0113 20:19:17.749116    2311 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:19:17.749194 kubelet[2311]: I0113 20:19:17.749157    2311 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:19:17.749955 kubelet[2311]: I0113 20:19:17.749496    2311 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:19:17.782567 kubelet[2311]: E0113 20:19:17.782516    2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.153.200:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:17.785256 kubelet[2311]: I0113 20:19:17.784315    2311 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:19:17.799910 kubelet[2311]: E0113 20:19:17.799855    2311 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:19:17.799910 kubelet[2311]: I0113 20:19:17.799908    2311 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:19:17.806321 kubelet[2311]: I0113 20:19:17.805169    2311 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:19:17.806321 kubelet[2311]: I0113 20:19:17.805626    2311 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:19:17.806321 kubelet[2311]: I0113 20:19:17.805758    2311 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:19:17.806321 kubelet[2311]: I0113 20:19:17.805787    2311 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-a-dc4fc49980","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:19:17.806639 kubelet[2311]: I0113 20:19:17.806219    2311 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:19:17.806639 kubelet[2311]: I0113 20:19:17.806232    2311 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:19:17.806639 kubelet[2311]: I0113 20:19:17.806623    2311 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:17.809656 kubelet[2311]: I0113 20:19:17.809603    2311 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:19:17.809656 kubelet[2311]: I0113 20:19:17.809651    2311 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:19:17.809835 kubelet[2311]: I0113 20:19:17.809681    2311 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:19:17.809835 kubelet[2311]: I0113 20:19:17.809693    2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:19:17.815759 kubelet[2311]: I0113 20:19:17.815681    2311 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:19:17.818266 kubelet[2311]: I0113 20:19:17.818076    2311 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:19:17.820209 kubelet[2311]: W0113 20:19:17.819014    2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:19:17.820209 kubelet[2311]: I0113 20:19:17.819829    2311 server.go:1269] "Started kubelet"
Jan 13 20:19:17.820209 kubelet[2311]: W0113 20:19:17.819994    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.200:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:17.820209 kubelet[2311]: E0113 20:19:17.820051    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.200:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:17.825979 kubelet[2311]: W0113 20:19:17.825896    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-a-dc4fc49980&limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:17.825979 kubelet[2311]: E0113 20:19:17.825966    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.153.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-a-dc4fc49980&limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:17.827388 kubelet[2311]: E0113 20:19:17.826023    2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.200:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.200:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-a-dc4fc49980.181a59fef1e79ec9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-a-dc4fc49980,UID:ci-4186-1-0-a-dc4fc49980,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-a-dc4fc49980,},FirstTimestamp:2025-01-13 20:19:17.819801289 +0000 UTC m=+1.977027476,LastTimestamp:2025-01-13 20:19:17.819801289 +0000 UTC m=+1.977027476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-a-dc4fc49980,}"
Jan 13 20:19:17.828500 kubelet[2311]: I0113 20:19:17.827610    2311 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:19:17.829396 kubelet[2311]: I0113 20:19:17.829351    2311 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:19:17.829753 kubelet[2311]: I0113 20:19:17.829731    2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:19:17.830364 kubelet[2311]: I0113 20:19:17.830286    2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:19:17.830605 kubelet[2311]: I0113 20:19:17.830577    2311 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:19:17.833278 kubelet[2311]: E0113 20:19:17.833234    2311 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:19:17.833689 kubelet[2311]: I0113 20:19:17.833655    2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:19:17.836947 kubelet[2311]: I0113 20:19:17.836112    2311 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:19:17.836947 kubelet[2311]: I0113 20:19:17.836265    2311 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:19:17.836947 kubelet[2311]: I0113 20:19:17.836384    2311 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:19:17.836947 kubelet[2311]: W0113 20:19:17.836895    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:17.837152 kubelet[2311]: E0113 20:19:17.836964    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.153.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:17.838144 kubelet[2311]: E0113 20:19:17.838089    2311 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-a-dc4fc49980\" not found"
Jan 13 20:19:17.838288 kubelet[2311]: E0113 20:19:17.838240    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-a-dc4fc49980?timeout=10s\": dial tcp 138.199.153.200:6443: connect: connection refused" interval="200ms"
Jan 13 20:19:17.838608 kubelet[2311]: I0113 20:19:17.838572    2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:19:17.840117 kubelet[2311]: I0113 20:19:17.840080    2311 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:19:17.840117 kubelet[2311]: I0113 20:19:17.840105    2311 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:19:17.851639 kubelet[2311]: I0113 20:19:17.851416    2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:19:17.853379 kubelet[2311]: I0113 20:19:17.852925    2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:19:17.853379 kubelet[2311]: I0113 20:19:17.852960    2311 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:19:17.853379 kubelet[2311]: I0113 20:19:17.852983    2311 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:19:17.853379 kubelet[2311]: E0113 20:19:17.853040    2311 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:19:17.864158 kubelet[2311]: W0113 20:19:17.863841    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:17.864158 kubelet[2311]: E0113 20:19:17.863903    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:17.870164 kubelet[2311]: I0113 20:19:17.870131    2311 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:19:17.870164 kubelet[2311]: I0113 20:19:17.870152    2311 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:19:17.870164 kubelet[2311]: I0113 20:19:17.870173    2311 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:17.873337 kubelet[2311]: I0113 20:19:17.873294    2311 policy_none.go:49] "None policy: Start"
Jan 13 20:19:17.874412 kubelet[2311]: I0113 20:19:17.874365    2311 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:19:17.874412 kubelet[2311]: I0113 20:19:17.874408    2311 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:19:17.885335 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:19:17.899546 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:19:17.904672 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:19:17.916044 kubelet[2311]: I0113 20:19:17.915059    2311 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:19:17.916044 kubelet[2311]: I0113 20:19:17.915477    2311 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:19:17.916044 kubelet[2311]: I0113 20:19:17.915498    2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:19:17.916044 kubelet[2311]: I0113 20:19:17.915876    2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:19:17.920095 kubelet[2311]: E0113 20:19:17.920050    2311 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-a-dc4fc49980\" not found"
Jan 13 20:19:17.969613 systemd[1]: Created slice kubepods-burstable-podb944ca5e06fb6c38b17b0c76ae0dad69.slice - libcontainer container kubepods-burstable-podb944ca5e06fb6c38b17b0c76ae0dad69.slice.
Jan 13 20:19:17.992472 systemd[1]: Created slice kubepods-burstable-pod4e1b0bd4fd531d0f6575f509c504e74b.slice - libcontainer container kubepods-burstable-pod4e1b0bd4fd531d0f6575f509c504e74b.slice.
Jan 13 20:19:17.999460 systemd[1]: Created slice kubepods-burstable-pod7bf17615c6847854eb32e738deccf5ad.slice - libcontainer container kubepods-burstable-pod7bf17615c6847854eb32e738deccf5ad.slice.
Jan 13 20:19:18.020303 kubelet[2311]: I0113 20:19:18.019385    2311 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.020893 kubelet[2311]: E0113 20:19:18.020843    2311 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.200:6443/api/v1/nodes\": dial tcp 138.199.153.200:6443: connect: connection refused" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.039838 kubelet[2311]: E0113 20:19:18.039690    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-a-dc4fc49980?timeout=10s\": dial tcp 138.199.153.200:6443: connect: connection refused" interval="400ms"
Jan 13 20:19:18.138695 kubelet[2311]: I0113 20:19:18.138625    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e1b0bd4fd531d0f6575f509c504e74b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-a-dc4fc49980\" (UID: \"4e1b0bd4fd531d0f6575f509c504e74b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138695 kubelet[2311]: I0113 20:19:18.138692    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138943 kubelet[2311]: I0113 20:19:18.138720    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138943 kubelet[2311]: I0113 20:19:18.138739    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138943 kubelet[2311]: I0113 20:19:18.138770    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138943 kubelet[2311]: I0113 20:19:18.138832    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.138943 kubelet[2311]: I0113 20:19:18.138853    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.139295 kubelet[2311]: I0113 20:19:18.138885    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.139295 kubelet[2311]: I0113 20:19:18.138905    2311 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.228517 kubelet[2311]: I0113 20:19:18.228086    2311 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.229758 kubelet[2311]: E0113 20:19:18.229428    2311 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.200:6443/api/v1/nodes\": dial tcp 138.199.153.200:6443: connect: connection refused" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.284775 containerd[1475]: time="2025-01-13T20:19:18.284239551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-a-dc4fc49980,Uid:b944ca5e06fb6c38b17b0c76ae0dad69,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:18.298515 containerd[1475]: time="2025-01-13T20:19:18.298454883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-a-dc4fc49980,Uid:4e1b0bd4fd531d0f6575f509c504e74b,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:18.306167 containerd[1475]: time="2025-01-13T20:19:18.305734971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-a-dc4fc49980,Uid:7bf17615c6847854eb32e738deccf5ad,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:18.442515 kubelet[2311]: E0113 20:19:18.441501    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-a-dc4fc49980?timeout=10s\": dial tcp 138.199.153.200:6443: connect: connection refused" interval="800ms"
Jan 13 20:19:18.633665 kubelet[2311]: I0113 20:19:18.633539    2311 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.634277 kubelet[2311]: E0113 20:19:18.634228    2311 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.200:6443/api/v1/nodes\": dial tcp 138.199.153.200:6443: connect: connection refused" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:18.863950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650103639.mount: Deactivated successfully.
Jan 13 20:19:18.896709 containerd[1475]: time="2025-01-13T20:19:18.894731871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:18.902496 containerd[1475]: time="2025-01-13T20:19:18.901951437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 13 20:19:18.910868 containerd[1475]: time="2025-01-13T20:19:18.910777935Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:18.921606 containerd[1475]: time="2025-01-13T20:19:18.921421524Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:18.926901 containerd[1475]: time="2025-01-13T20:19:18.926828199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:18.930918 containerd[1475]: time="2025-01-13T20:19:18.930786464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:19:18.933882 containerd[1475]: time="2025-01-13T20:19:18.933763444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:19:18.939831 containerd[1475]: time="2025-01-13T20:19:18.938138512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:19:18.942565 containerd[1475]: time="2025-01-13T20:19:18.941990217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.644705ms"
Jan 13 20:19:18.948813 containerd[1475]: time="2025-01-13T20:19:18.948732941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 650.140097ms"
Jan 13 20:19:18.970591 kubelet[2311]: W0113 20:19:18.970210    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:18.970591 kubelet[2311]: E0113 20:19:18.970304    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:18.977690 containerd[1475]: time="2025-01-13T20:19:18.977393567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 671.535875ms"
Jan 13 20:19:19.122816 containerd[1475]: time="2025-01-13T20:19:19.122653648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:19.122816 containerd[1475]: time="2025-01-13T20:19:19.122731928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:19.123575 containerd[1475]: time="2025-01-13T20:19:19.122749129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.125039 containerd[1475]: time="2025-01-13T20:19:19.124919583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.127460 containerd[1475]: time="2025-01-13T20:19:19.127157638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:19.127778 containerd[1475]: time="2025-01-13T20:19:19.127577081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:19.127778 containerd[1475]: time="2025-01-13T20:19:19.127636961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.128141 containerd[1475]: time="2025-01-13T20:19:19.128092764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.130977 kubelet[2311]: W0113 20:19:19.130909    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.200:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:19.132345 kubelet[2311]: E0113 20:19:19.132147    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.200:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:19.134171 containerd[1475]: time="2025-01-13T20:19:19.133720881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:19.134171 containerd[1475]: time="2025-01-13T20:19:19.133788202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:19.134171 containerd[1475]: time="2025-01-13T20:19:19.133803922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.134171 containerd[1475]: time="2025-01-13T20:19:19.133938483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.158705 systemd[1]: Started cri-containerd-b1eebe170271174f712fecb2b0af72f72691ca0f103315b23b202404206f4593.scope - libcontainer container b1eebe170271174f712fecb2b0af72f72691ca0f103315b23b202404206f4593.
Jan 13 20:19:19.169600 systemd[1]: Started cri-containerd-39982b2b4d3b934ad8259af1764c9d9639a742f86dd0164e2a259fdd63a1c22d.scope - libcontainer container 39982b2b4d3b934ad8259af1764c9d9639a742f86dd0164e2a259fdd63a1c22d.
Jan 13 20:19:19.179083 systemd[1]: Started cri-containerd-33c84913d8e25107591b31f24268b28969823fce240db12ab25cf24d57bf09ca.scope - libcontainer container 33c84913d8e25107591b31f24268b28969823fce240db12ab25cf24d57bf09ca.
Jan 13 20:19:19.241106 containerd[1475]: time="2025-01-13T20:19:19.241063555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-a-dc4fc49980,Uid:b944ca5e06fb6c38b17b0c76ae0dad69,Namespace:kube-system,Attempt:0,} returns sandbox id \"39982b2b4d3b934ad8259af1764c9d9639a742f86dd0164e2a259fdd63a1c22d\""
Jan 13 20:19:19.243155 kubelet[2311]: E0113 20:19:19.243028    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-a-dc4fc49980?timeout=10s\": dial tcp 138.199.153.200:6443: connect: connection refused" interval="1.6s"
Jan 13 20:19:19.250073 containerd[1475]: time="2025-01-13T20:19:19.249815733Z" level=info msg="CreateContainer within sandbox \"39982b2b4d3b934ad8259af1764c9d9639a742f86dd0164e2a259fdd63a1c22d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:19:19.255521 containerd[1475]: time="2025-01-13T20:19:19.255386810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-a-dc4fc49980,Uid:7bf17615c6847854eb32e738deccf5ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1eebe170271174f712fecb2b0af72f72691ca0f103315b23b202404206f4593\""
Jan 13 20:19:19.260751 containerd[1475]: time="2025-01-13T20:19:19.260604085Z" level=info msg="CreateContainer within sandbox \"b1eebe170271174f712fecb2b0af72f72691ca0f103315b23b202404206f4593\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:19:19.267158 containerd[1475]: time="2025-01-13T20:19:19.266936887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-a-dc4fc49980,Uid:4e1b0bd4fd531d0f6575f509c504e74b,Namespace:kube-system,Attempt:0,} returns sandbox id \"33c84913d8e25107591b31f24268b28969823fce240db12ab25cf24d57bf09ca\""
Jan 13 20:19:19.272692 containerd[1475]: time="2025-01-13T20:19:19.272103681Z" level=info msg="CreateContainer within sandbox \"33c84913d8e25107591b31f24268b28969823fce240db12ab25cf24d57bf09ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:19:19.284933 containerd[1475]: time="2025-01-13T20:19:19.284837286Z" level=info msg="CreateContainer within sandbox \"39982b2b4d3b934ad8259af1764c9d9639a742f86dd0164e2a259fdd63a1c22d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"20c10f3f8ecb5d0f1de9db4a38cf797cd4ca0de907a9175acd0e1a43895d9320\""
Jan 13 20:19:19.286364 containerd[1475]: time="2025-01-13T20:19:19.286278575Z" level=info msg="StartContainer for \"20c10f3f8ecb5d0f1de9db4a38cf797cd4ca0de907a9175acd0e1a43895d9320\""
Jan 13 20:19:19.301047 kubelet[2311]: W0113 20:19:19.300795    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-a-dc4fc49980&limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:19.301047 kubelet[2311]: E0113 20:19:19.300976    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.153.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-a-dc4fc49980&limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:19.308145 containerd[1475]: time="2025-01-13T20:19:19.308088560Z" level=info msg="CreateContainer within sandbox \"b1eebe170271174f712fecb2b0af72f72691ca0f103315b23b202404206f4593\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4874b3c4060ebf6aad2b51daa81ac8480a8de9f5038612f5f8860ffbf96204b\""
Jan 13 20:19:19.310479 containerd[1475]: time="2025-01-13T20:19:19.310124134Z" level=info msg="StartContainer for \"c4874b3c4060ebf6aad2b51daa81ac8480a8de9f5038612f5f8860ffbf96204b\""
Jan 13 20:19:19.321348 containerd[1475]: time="2025-01-13T20:19:19.321102006Z" level=info msg="CreateContainer within sandbox \"33c84913d8e25107591b31f24268b28969823fce240db12ab25cf24d57bf09ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8b47e3e89495b6edc1255ea053a22fec1d26bd7bd9372f820e11ca70510e18a\""
Jan 13 20:19:19.323667 containerd[1475]: time="2025-01-13T20:19:19.323611503Z" level=info msg="StartContainer for \"e8b47e3e89495b6edc1255ea053a22fec1d26bd7bd9372f820e11ca70510e18a\""
Jan 13 20:19:19.324920 systemd[1]: Started cri-containerd-20c10f3f8ecb5d0f1de9db4a38cf797cd4ca0de907a9175acd0e1a43895d9320.scope - libcontainer container 20c10f3f8ecb5d0f1de9db4a38cf797cd4ca0de907a9175acd0e1a43895d9320.
Jan 13 20:19:19.364323 kubelet[2311]: W0113 20:19:19.363891    2311 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.200:6443: connect: connection refused
Jan 13 20:19:19.364323 kubelet[2311]: E0113 20:19:19.364260    2311 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.153.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.200:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:19:19.378762 systemd[1]: Started cri-containerd-c4874b3c4060ebf6aad2b51daa81ac8480a8de9f5038612f5f8860ffbf96204b.scope - libcontainer container c4874b3c4060ebf6aad2b51daa81ac8480a8de9f5038612f5f8860ffbf96204b.
Jan 13 20:19:19.389982 systemd[1]: Started cri-containerd-e8b47e3e89495b6edc1255ea053a22fec1d26bd7bd9372f820e11ca70510e18a.scope - libcontainer container e8b47e3e89495b6edc1255ea053a22fec1d26bd7bd9372f820e11ca70510e18a.
Jan 13 20:19:19.410922 containerd[1475]: time="2025-01-13T20:19:19.410685362Z" level=info msg="StartContainer for \"20c10f3f8ecb5d0f1de9db4a38cf797cd4ca0de907a9175acd0e1a43895d9320\" returns successfully"
Jan 13 20:19:19.441771 kubelet[2311]: I0113 20:19:19.441035    2311 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:19.442316 kubelet[2311]: E0113 20:19:19.442275    2311 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.200:6443/api/v1/nodes\": dial tcp 138.199.153.200:6443: connect: connection refused" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:19.485905 containerd[1475]: time="2025-01-13T20:19:19.485661700Z" level=info msg="StartContainer for \"e8b47e3e89495b6edc1255ea053a22fec1d26bd7bd9372f820e11ca70510e18a\" returns successfully"
Jan 13 20:19:19.501639 containerd[1475]: time="2025-01-13T20:19:19.501583486Z" level=info msg="StartContainer for \"c4874b3c4060ebf6aad2b51daa81ac8480a8de9f5038612f5f8860ffbf96204b\" returns successfully"
Jan 13 20:19:21.048541 kubelet[2311]: I0113 20:19:21.048102    2311 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:22.335201 kubelet[2311]: E0113 20:19:22.335139    2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-a-dc4fc49980\" not found" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:22.455992 kubelet[2311]: E0113 20:19:22.455663    2311 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-a-dc4fc49980.181a59fef1e79ec9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-a-dc4fc49980,UID:ci-4186-1-0-a-dc4fc49980,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-a-dc4fc49980,},FirstTimestamp:2025-01-13 20:19:17.819801289 +0000 UTC m=+1.977027476,LastTimestamp:2025-01-13 20:19:17.819801289 +0000 UTC m=+1.977027476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-a-dc4fc49980,}"
Jan 13 20:19:22.520299 kubelet[2311]: I0113 20:19:22.520217    2311 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:22.520299 kubelet[2311]: E0113 20:19:22.520297    2311 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186-1-0-a-dc4fc49980\": node \"ci-4186-1-0-a-dc4fc49980\" not found"
Jan 13 20:19:22.524776 kubelet[2311]: E0113 20:19:22.524649    2311 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-a-dc4fc49980.181a59fef2b44a1e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-a-dc4fc49980,UID:ci-4186-1-0-a-dc4fc49980,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-a-dc4fc49980,},FirstTimestamp:2025-01-13 20:19:17.833214494 +0000 UTC m=+1.990440681,LastTimestamp:2025-01-13 20:19:17.833214494 +0000 UTC m=+1.990440681,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-a-dc4fc49980,}"
Jan 13 20:19:22.587448 kubelet[2311]: E0113 20:19:22.587126    2311 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-a-dc4fc49980.181a59fef4ddd7c3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-a-dc4fc49980,UID:ci-4186-1-0-a-dc4fc49980,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186-1-0-a-dc4fc49980 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-a-dc4fc49980,},FirstTimestamp:2025-01-13 20:19:17.869492163 +0000 UTC m=+2.026718310,LastTimestamp:2025-01-13 20:19:17.869492163 +0000 UTC m=+2.026718310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-a-dc4fc49980,}"
Jan 13 20:19:22.647112 kubelet[2311]: E0113 20:19:22.646977    2311 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-a-dc4fc49980.181a59fef4ddeb23  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-a-dc4fc49980,UID:ci-4186-1-0-a-dc4fc49980,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4186-1-0-a-dc4fc49980 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-a-dc4fc49980,},FirstTimestamp:2025-01-13 20:19:17.869497123 +0000 UTC m=+2.026723310,LastTimestamp:2025-01-13 20:19:17.869497123 +0000 UTC m=+2.026723310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-a-dc4fc49980,}"
Jan 13 20:19:22.811231 kubelet[2311]: I0113 20:19:22.811174    2311 apiserver.go:52] "Watching apiserver"
Jan 13 20:19:22.837741 kubelet[2311]: I0113 20:19:22.837570    2311 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:19:23.115968 kubelet[2311]: E0113 20:19:23.114667    2311 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:24.809930 systemd[1]: Reloading requested from client PID 2587 ('systemctl') (unit session-5.scope)...
Jan 13 20:19:24.810546 systemd[1]: Reloading...
Jan 13 20:19:24.969562 zram_generator::config[2636]: No configuration found.
Jan 13 20:19:25.085397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:19:25.173321 systemd[1]: Reloading finished in 362 ms.
Jan 13 20:19:25.221021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:25.235359 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:19:25.235823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:25.236096 systemd[1]: kubelet.service: Consumed 2.510s CPU time, 114.6M memory peak, 0B memory swap peak.
Jan 13 20:19:25.250457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:19:25.401837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:19:25.410973 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:19:25.490135 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:25.490135 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:19:25.490135 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:25.490135 kubelet[2672]: I0113 20:19:25.488710    2672 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:19:25.500412 kubelet[2672]: I0113 20:19:25.500367    2672 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:19:25.500412 kubelet[2672]: I0113 20:19:25.500406    2672 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:19:25.501616 kubelet[2672]: I0113 20:19:25.501587    2672 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:19:25.505488 kubelet[2672]: I0113 20:19:25.503511    2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:19:25.506165 kubelet[2672]: I0113 20:19:25.506128    2672 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:19:25.512983 kubelet[2672]: E0113 20:19:25.510824    2672 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:19:25.512983 kubelet[2672]: I0113 20:19:25.510958    2672 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:19:25.515358 kubelet[2672]: I0113 20:19:25.515328    2672 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:19:25.515701 kubelet[2672]: I0113 20:19:25.515688    2672 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:19:25.516129 kubelet[2672]: I0113 20:19:25.516080    2672 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:19:25.516517 kubelet[2672]: I0113 20:19:25.516228    2672 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-a-dc4fc49980","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:19:25.516693 kubelet[2672]: I0113 20:19:25.516668    2672 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:19:25.516743 kubelet[2672]: I0113 20:19:25.516735    2672 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:19:25.516873 kubelet[2672]: I0113 20:19:25.516864    2672 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:25.517153 kubelet[2672]: I0113 20:19:25.517142    2672 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:19:25.517339 kubelet[2672]: I0113 20:19:25.517325    2672 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:19:25.517415 kubelet[2672]: I0113 20:19:25.517407    2672 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:19:25.517502 kubelet[2672]: I0113 20:19:25.517492    2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:19:25.519569 kubelet[2672]: I0113 20:19:25.519526    2672 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:19:25.521791 kubelet[2672]: I0113 20:19:25.520975    2672 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:19:25.531081 kubelet[2672]: I0113 20:19:25.531055    2672 server.go:1269] "Started kubelet"
Jan 13 20:19:25.532991 kubelet[2672]: I0113 20:19:25.532916    2672 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:19:25.535077 kubelet[2672]: I0113 20:19:25.535042    2672 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:19:25.536921 kubelet[2672]: I0113 20:19:25.536323    2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:19:25.536921 kubelet[2672]: I0113 20:19:25.536593    2672 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:19:25.537611 kubelet[2672]: I0113 20:19:25.537590    2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:19:25.543763 kubelet[2672]: I0113 20:19:25.543720    2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:19:25.545713 kubelet[2672]: I0113 20:19:25.545682    2672 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:19:25.546552 kubelet[2672]: E0113 20:19:25.546151    2672 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-a-dc4fc49980\" not found"
Jan 13 20:19:25.548558 kubelet[2672]: I0113 20:19:25.548526    2672 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:19:25.548873 kubelet[2672]: I0113 20:19:25.548826    2672 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:19:25.551316 kubelet[2672]: I0113 20:19:25.551270    2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:19:25.553145 kubelet[2672]: I0113 20:19:25.553109    2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:19:25.553432 kubelet[2672]: I0113 20:19:25.553268    2672 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:19:25.553432 kubelet[2672]: I0113 20:19:25.553299    2672 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:19:25.553792 kubelet[2672]: E0113 20:19:25.553703    2672 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:19:25.563539 kubelet[2672]: I0113 20:19:25.562539    2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:19:25.569227 kubelet[2672]: E0113 20:19:25.569179    2672 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:19:25.569864 kubelet[2672]: I0113 20:19:25.569841    2672 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:19:25.569951 kubelet[2672]: I0113 20:19:25.569943    2672 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:19:25.655726 kubelet[2672]: E0113 20:19:25.655603    2672 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 20:19:25.656014 kubelet[2672]: I0113 20:19:25.655998    2672 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:19:25.656096 kubelet[2672]: I0113 20:19:25.656083    2672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:19:25.656178 kubelet[2672]: I0113 20:19:25.656154    2672 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:25.656831 kubelet[2672]: I0113 20:19:25.656774    2672 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:19:25.657128 kubelet[2672]: I0113 20:19:25.657061    2672 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:19:25.657313 kubelet[2672]: I0113 20:19:25.657300    2672 policy_none.go:49] "None policy: Start"
Jan 13 20:19:25.660113 kubelet[2672]: I0113 20:19:25.660057    2672 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:19:25.660353 kubelet[2672]: I0113 20:19:25.660337    2672 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:19:25.660798 kubelet[2672]: I0113 20:19:25.660770    2672 state_mem.go:75] "Updated machine memory state"
Jan 13 20:19:25.668952 kubelet[2672]: I0113 20:19:25.668887    2672 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:19:25.670496 kubelet[2672]: I0113 20:19:25.669692    2672 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:19:25.670496 kubelet[2672]: I0113 20:19:25.669722    2672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:19:25.673648 kubelet[2672]: I0113 20:19:25.673622    2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:19:25.792424 kubelet[2672]: I0113 20:19:25.792150    2672 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.812473 kubelet[2672]: I0113 20:19:25.812403    2672 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.812596 kubelet[2672]: I0113 20:19:25.812570    2672 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.872749 kubelet[2672]: E0113 20:19:25.872667    2672 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951614 kubelet[2672]: I0113 20:19:25.951029    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951614 kubelet[2672]: I0113 20:19:25.951131    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951614 kubelet[2672]: I0113 20:19:25.951157    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951614 kubelet[2672]: I0113 20:19:25.951223    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951614 kubelet[2672]: I0113 20:19:25.951243    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951840 kubelet[2672]: I0113 20:19:25.951298    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b944ca5e06fb6c38b17b0c76ae0dad69-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-a-dc4fc49980\" (UID: \"b944ca5e06fb6c38b17b0c76ae0dad69\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951840 kubelet[2672]: I0113 20:19:25.951321    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e1b0bd4fd531d0f6575f509c504e74b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-a-dc4fc49980\" (UID: \"4e1b0bd4fd531d0f6575f509c504e74b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951840 kubelet[2672]: I0113 20:19:25.951369    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:25.951840 kubelet[2672]: I0113 20:19:25.951389    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bf17615c6847854eb32e738deccf5ad-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" (UID: \"7bf17615c6847854eb32e738deccf5ad\") " pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:26.518060 kubelet[2672]: I0113 20:19:26.517867    2672 apiserver.go:52] "Watching apiserver"
Jan 13 20:19:26.550474 kubelet[2672]: I0113 20:19:26.548921    2672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:19:26.621609 kubelet[2672]: E0113 20:19:26.621554    2672 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-a-dc4fc49980\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980"
Jan 13 20:19:26.657639 kubelet[2672]: I0113 20:19:26.657372    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-a-dc4fc49980" podStartSLOduration=3.657323336 podStartE2EDuration="3.657323336s" podCreationTimestamp="2025-01-13 20:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:26.640637849 +0000 UTC m=+1.223398614" watchObservedRunningTime="2025-01-13 20:19:26.657323336 +0000 UTC m=+1.240084141"
Jan 13 20:19:26.662918 kubelet[2672]: I0113 20:19:26.662555    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-a-dc4fc49980" podStartSLOduration=1.662521736 podStartE2EDuration="1.662521736s" podCreationTimestamp="2025-01-13 20:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:26.655045199 +0000 UTC m=+1.237805964" watchObservedRunningTime="2025-01-13 20:19:26.662521736 +0000 UTC m=+1.245282501"
Jan 13 20:19:26.698795 kubelet[2672]: I0113 20:19:26.698681    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-a-dc4fc49980" podStartSLOduration=1.6986646109999999 podStartE2EDuration="1.698664611s" podCreationTimestamp="2025-01-13 20:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:26.678118455 +0000 UTC m=+1.260879260" watchObservedRunningTime="2025-01-13 20:19:26.698664611 +0000 UTC m=+1.281425376"
Jan 13 20:19:27.021372 sudo[1785]: pam_unix(sudo:session): session closed for user root
Jan 13 20:19:27.184460 sshd[1784]: Connection closed by 139.178.89.65 port 57134
Jan 13 20:19:27.183896 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Jan 13 20:19:27.195096 systemd[1]: sshd@4-138.199.153.200:22-139.178.89.65:57134.service: Deactivated successfully.
Jan 13 20:19:27.200369 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:19:27.200874 systemd[1]: session-5.scope: Consumed 7.702s CPU time, 155.7M memory peak, 0B memory swap peak.
Jan 13 20:19:27.201912 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:19:27.203970 systemd-logind[1447]: Removed session 5.
Jan 13 20:19:29.114521 kubelet[2672]: I0113 20:19:29.114393    2672 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:19:29.115511 containerd[1475]: time="2025-01-13T20:19:29.114902359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:19:29.116014 kubelet[2672]: I0113 20:19:29.115628    2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:19:29.948848 systemd[1]: Created slice kubepods-besteffort-pod8b75b1f7_8d51_481b_a46a_a1d71a309af4.slice - libcontainer container kubepods-besteffort-pod8b75b1f7_8d51_481b_a46a_a1d71a309af4.slice.
Jan 13 20:19:29.967134 systemd[1]: Created slice kubepods-burstable-podc8a8152a_ed55_40ac_a288_911159c42db1.slice - libcontainer container kubepods-burstable-podc8a8152a_ed55_40ac_a288_911159c42db1.slice.
Jan 13 20:19:29.978545 kubelet[2672]: I0113 20:19:29.978466    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c8a8152a-ed55-40ac-a288-911159c42db1-run\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979033 kubelet[2672]: I0113 20:19:29.978791    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c8a8152a-ed55-40ac-a288-911159c42db1-cni\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979033 kubelet[2672]: I0113 20:19:29.978818    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c8a8152a-ed55-40ac-a288-911159c42db1-flannel-cfg\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979033 kubelet[2672]: I0113 20:19:29.978837    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8dr\" (UniqueName: \"kubernetes.io/projected/c8a8152a-ed55-40ac-a288-911159c42db1-kube-api-access-8n8dr\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979033 kubelet[2672]: I0113 20:19:29.978853    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b75b1f7-8d51-481b-a46a-a1d71a309af4-kube-proxy\") pod \"kube-proxy-fnk4q\" (UID: \"8b75b1f7-8d51-481b-a46a-a1d71a309af4\") " pod="kube-system/kube-proxy-fnk4q"
Jan 13 20:19:29.979033 kubelet[2672]: I0113 20:19:29.978867    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b75b1f7-8d51-481b-a46a-a1d71a309af4-xtables-lock\") pod \"kube-proxy-fnk4q\" (UID: \"8b75b1f7-8d51-481b-a46a-a1d71a309af4\") " pod="kube-system/kube-proxy-fnk4q"
Jan 13 20:19:29.979182 kubelet[2672]: I0113 20:19:29.978882    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b75b1f7-8d51-481b-a46a-a1d71a309af4-lib-modules\") pod \"kube-proxy-fnk4q\" (UID: \"8b75b1f7-8d51-481b-a46a-a1d71a309af4\") " pod="kube-system/kube-proxy-fnk4q"
Jan 13 20:19:29.979182 kubelet[2672]: I0113 20:19:29.978902    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c8a8152a-ed55-40ac-a288-911159c42db1-cni-plugin\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979182 kubelet[2672]: I0113 20:19:29.978922    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8a8152a-ed55-40ac-a288-911159c42db1-xtables-lock\") pod \"kube-flannel-ds-q755s\" (UID: \"c8a8152a-ed55-40ac-a288-911159c42db1\") " pod="kube-flannel/kube-flannel-ds-q755s"
Jan 13 20:19:29.979182 kubelet[2672]: I0113 20:19:29.978950    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skpl\" (UniqueName: \"kubernetes.io/projected/8b75b1f7-8d51-481b-a46a-a1d71a309af4-kube-api-access-9skpl\") pod \"kube-proxy-fnk4q\" (UID: \"8b75b1f7-8d51-481b-a46a-a1d71a309af4\") " pod="kube-system/kube-proxy-fnk4q"
Jan 13 20:19:30.093948 kubelet[2672]: E0113 20:19:30.093648    2672 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 13 20:19:30.093948 kubelet[2672]: E0113 20:19:30.093688    2672 projected.go:194] Error preparing data for projected volume kube-api-access-9skpl for pod kube-system/kube-proxy-fnk4q: configmap "kube-root-ca.crt" not found
Jan 13 20:19:30.093948 kubelet[2672]: E0113 20:19:30.093755    2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b75b1f7-8d51-481b-a46a-a1d71a309af4-kube-api-access-9skpl podName:8b75b1f7-8d51-481b-a46a-a1d71a309af4 nodeName:}" failed. No retries permitted until 2025-01-13 20:19:30.593731742 +0000 UTC m=+5.176492507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9skpl" (UniqueName: "kubernetes.io/projected/8b75b1f7-8d51-481b-a46a-a1d71a309af4-kube-api-access-9skpl") pod "kube-proxy-fnk4q" (UID: "8b75b1f7-8d51-481b-a46a-a1d71a309af4") : configmap "kube-root-ca.crt" not found
Jan 13 20:19:30.274293 containerd[1475]: time="2025-01-13T20:19:30.273869440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q755s,Uid:c8a8152a-ed55-40ac-a288-911159c42db1,Namespace:kube-flannel,Attempt:0,}"
Jan 13 20:19:30.310136 containerd[1475]: time="2025-01-13T20:19:30.310020412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:30.310583 containerd[1475]: time="2025-01-13T20:19:30.310155053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:30.310583 containerd[1475]: time="2025-01-13T20:19:30.310170013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:30.310583 containerd[1475]: time="2025-01-13T20:19:30.310322335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:30.337907 systemd[1]: Started cri-containerd-994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa.scope - libcontainer container 994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa.
Jan 13 20:19:30.376769 containerd[1475]: time="2025-01-13T20:19:30.376727312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q755s,Uid:c8a8152a-ed55-40ac-a288-911159c42db1,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\""
Jan 13 20:19:30.380511 containerd[1475]: time="2025-01-13T20:19:30.380251501Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 20:19:30.860925 containerd[1475]: time="2025-01-13T20:19:30.860565228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnk4q,Uid:8b75b1f7-8d51-481b-a46a-a1d71a309af4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:30.890895 containerd[1475]: time="2025-01-13T20:19:30.890787913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:30.890895 containerd[1475]: time="2025-01-13T20:19:30.890846553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:30.891117 containerd[1475]: time="2025-01-13T20:19:30.890863953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:30.891117 containerd[1475]: time="2025-01-13T20:19:30.890942514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:30.908695 systemd[1]: Started cri-containerd-c724ccfd976856607fa1f05a7e070d830ec0a68a1b3f1c4b16c2b58652b81a86.scope - libcontainer container c724ccfd976856607fa1f05a7e070d830ec0a68a1b3f1c4b16c2b58652b81a86.
Jan 13 20:19:30.944169 containerd[1475]: time="2025-01-13T20:19:30.944098984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnk4q,Uid:8b75b1f7-8d51-481b-a46a-a1d71a309af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c724ccfd976856607fa1f05a7e070d830ec0a68a1b3f1c4b16c2b58652b81a86\""
Jan 13 20:19:30.950519 containerd[1475]: time="2025-01-13T20:19:30.950412395Z" level=info msg="CreateContainer within sandbox \"c724ccfd976856607fa1f05a7e070d830ec0a68a1b3f1c4b16c2b58652b81a86\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:19:30.976987 containerd[1475]: time="2025-01-13T20:19:30.976933730Z" level=info msg="CreateContainer within sandbox \"c724ccfd976856607fa1f05a7e070d830ec0a68a1b3f1c4b16c2b58652b81a86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f90adef32061e8c002ba8fdb2aa29a47a5ca5c98e80d7274e8c915d6f8b31ee\""
Jan 13 20:19:30.979764 containerd[1475]: time="2025-01-13T20:19:30.979472710Z" level=info msg="StartContainer for \"4f90adef32061e8c002ba8fdb2aa29a47a5ca5c98e80d7274e8c915d6f8b31ee\""
Jan 13 20:19:31.023664 systemd[1]: Started cri-containerd-4f90adef32061e8c002ba8fdb2aa29a47a5ca5c98e80d7274e8c915d6f8b31ee.scope - libcontainer container 4f90adef32061e8c002ba8fdb2aa29a47a5ca5c98e80d7274e8c915d6f8b31ee.
Jan 13 20:19:31.068389 containerd[1475]: time="2025-01-13T20:19:31.068252316Z" level=info msg="StartContainer for \"4f90adef32061e8c002ba8fdb2aa29a47a5ca5c98e80d7274e8c915d6f8b31ee\" returns successfully"
Jan 13 20:19:33.008533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244019822.mount: Deactivated successfully.
Jan 13 20:19:33.063516 containerd[1475]: time="2025-01-13T20:19:33.062729833Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:33.066538 containerd[1475]: time="2025-01-13T20:19:33.064167245Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Jan 13 20:19:33.068430 containerd[1475]: time="2025-01-13T20:19:33.068368440Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:33.072311 containerd[1475]: time="2025-01-13T20:19:33.072251073Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:33.074688 containerd[1475]: time="2025-01-13T20:19:33.074616933Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.69410575s"
Jan 13 20:19:33.074688 containerd[1475]: time="2025-01-13T20:19:33.074681453Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 13 20:19:33.080980 containerd[1475]: time="2025-01-13T20:19:33.080929266Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 20:19:33.106333 containerd[1475]: time="2025-01-13T20:19:33.106128158Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a\""
Jan 13 20:19:33.107630 containerd[1475]: time="2025-01-13T20:19:33.107025805Z" level=info msg="StartContainer for \"11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a\""
Jan 13 20:19:33.149750 systemd[1]: Started cri-containerd-11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a.scope - libcontainer container 11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a.
Jan 13 20:19:33.185647 containerd[1475]: time="2025-01-13T20:19:33.185596386Z" level=info msg="StartContainer for \"11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a\" returns successfully"
Jan 13 20:19:33.187522 systemd[1]: cri-containerd-11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a.scope: Deactivated successfully.
Jan 13 20:19:33.236881 containerd[1475]: time="2025-01-13T20:19:33.236677495Z" level=info msg="shim disconnected" id=11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a namespace=k8s.io
Jan 13 20:19:33.237530 containerd[1475]: time="2025-01-13T20:19:33.237283580Z" level=warning msg="cleaning up after shim disconnected" id=11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a namespace=k8s.io
Jan 13 20:19:33.237530 containerd[1475]: time="2025-01-13T20:19:33.237314381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:33.654701 containerd[1475]: time="2025-01-13T20:19:33.654415888Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 20:19:33.674106 kubelet[2672]: I0113 20:19:33.673791    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnk4q" podStartSLOduration=4.673755531 podStartE2EDuration="4.673755531s" podCreationTimestamp="2025-01-13 20:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:31.646925983 +0000 UTC m=+6.229686748" watchObservedRunningTime="2025-01-13 20:19:33.673755531 +0000 UTC m=+8.256516296"
Jan 13 20:19:33.914754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11118fa85c624cfcb48d3a9c9fb730637ce6783c1b8d75696d64acd5f5a8a73a-rootfs.mount: Deactivated successfully.
Jan 13 20:19:36.291939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798957917.mount: Deactivated successfully.
Jan 13 20:19:37.166572 containerd[1475]: time="2025-01-13T20:19:37.166497581Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:37.170098 containerd[1475]: time="2025-01-13T20:19:37.169995332Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Jan 13 20:19:37.172103 containerd[1475]: time="2025-01-13T20:19:37.172014149Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:37.178151 containerd[1475]: time="2025-01-13T20:19:37.178047322Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:37.181542 containerd[1475]: time="2025-01-13T20:19:37.181280671Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.526800943s"
Jan 13 20:19:37.181542 containerd[1475]: time="2025-01-13T20:19:37.181337351Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Jan 13 20:19:37.187069 containerd[1475]: time="2025-01-13T20:19:37.186778039Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:19:37.226351 containerd[1475]: time="2025-01-13T20:19:37.226173025Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039\""
Jan 13 20:19:37.229502 containerd[1475]: time="2025-01-13T20:19:37.228718007Z" level=info msg="StartContainer for \"cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039\""
Jan 13 20:19:37.277899 systemd[1]: Started cri-containerd-cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039.scope - libcontainer container cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039.
Jan 13 20:19:37.309996 systemd[1]: cri-containerd-cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039.scope: Deactivated successfully.
Jan 13 20:19:37.312268 containerd[1475]: time="2025-01-13T20:19:37.311756297Z" level=info msg="StartContainer for \"cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039\" returns successfully"
Jan 13 20:19:37.341558 kubelet[2672]: I0113 20:19:37.340295    2672 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 13 20:19:37.416876 systemd[1]: Created slice kubepods-burstable-podebd5237e_f6c8_4d85_ae63_ca339a36c567.slice - libcontainer container kubepods-burstable-podebd5237e_f6c8_4d85_ae63_ca339a36c567.slice.
Jan 13 20:19:37.426827 containerd[1475]: time="2025-01-13T20:19:37.426106821Z" level=info msg="shim disconnected" id=cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039 namespace=k8s.io
Jan 13 20:19:37.426827 containerd[1475]: time="2025-01-13T20:19:37.426425224Z" level=warning msg="cleaning up after shim disconnected" id=cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039 namespace=k8s.io
Jan 13 20:19:37.426827 containerd[1475]: time="2025-01-13T20:19:37.426469625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:37.440995 systemd[1]: Created slice kubepods-burstable-pod1a9ac66f_056e_4836_8608_11dee01b258b.slice - libcontainer container kubepods-burstable-pod1a9ac66f_056e_4836_8608_11dee01b258b.slice.
Jan 13 20:19:37.454849 containerd[1475]: time="2025-01-13T20:19:37.454754793Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:19:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:19:37.537735 kubelet[2672]: I0113 20:19:37.537651    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxc9\" (UniqueName: \"kubernetes.io/projected/ebd5237e-f6c8-4d85-ae63-ca339a36c567-kube-api-access-gbxc9\") pod \"coredns-6f6b679f8f-r4z66\" (UID: \"ebd5237e-f6c8-4d85-ae63-ca339a36c567\") " pod="kube-system/coredns-6f6b679f8f-r4z66"
Jan 13 20:19:37.539730 kubelet[2672]: I0113 20:19:37.539318    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd5237e-f6c8-4d85-ae63-ca339a36c567-config-volume\") pod \"coredns-6f6b679f8f-r4z66\" (UID: \"ebd5237e-f6c8-4d85-ae63-ca339a36c567\") " pod="kube-system/coredns-6f6b679f8f-r4z66"
Jan 13 20:19:37.539730 kubelet[2672]: I0113 20:19:37.539399    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b22x\" (UniqueName: \"kubernetes.io/projected/1a9ac66f-056e-4836-8608-11dee01b258b-kube-api-access-6b22x\") pod \"coredns-6f6b679f8f-d8gsb\" (UID: \"1a9ac66f-056e-4836-8608-11dee01b258b\") " pod="kube-system/coredns-6f6b679f8f-d8gsb"
Jan 13 20:19:37.539730 kubelet[2672]: I0113 20:19:37.539498    2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9ac66f-056e-4836-8608-11dee01b258b-config-volume\") pod \"coredns-6f6b679f8f-d8gsb\" (UID: \"1a9ac66f-056e-4836-8608-11dee01b258b\") " pod="kube-system/coredns-6f6b679f8f-d8gsb"
Jan 13 20:19:37.673663 containerd[1475]: time="2025-01-13T20:19:37.673499915Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 20:19:37.716114 containerd[1475]: time="2025-01-13T20:19:37.716044368Z" level=info msg="CreateContainer within sandbox \"994ac844e194d9810e3ddd343bdafc8dc8056858594e9bea9aa6c37d044adefa\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"06f9fbab41d6421325a44d4111a887c33211cb30cad23a22af59c58847dc8a33\""
Jan 13 20:19:37.717152 containerd[1475]: time="2025-01-13T20:19:37.717096658Z" level=info msg="StartContainer for \"06f9fbab41d6421325a44d4111a887c33211cb30cad23a22af59c58847dc8a33\""
Jan 13 20:19:37.736518 containerd[1475]: time="2025-01-13T20:19:37.735413659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r4z66,Uid:ebd5237e-f6c8-4d85-ae63-ca339a36c567,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:37.753948 systemd[1]: Started cri-containerd-06f9fbab41d6421325a44d4111a887c33211cb30cad23a22af59c58847dc8a33.scope - libcontainer container 06f9fbab41d6421325a44d4111a887c33211cb30cad23a22af59c58847dc8a33.
Jan 13 20:19:37.767527 containerd[1475]: time="2025-01-13T20:19:37.767451740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d8gsb,Uid:1a9ac66f-056e-4836-8608-11dee01b258b,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:37.797857 containerd[1475]: time="2025-01-13T20:19:37.797735606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r4z66,Uid:ebd5237e-f6c8-4d85-ae63-ca339a36c567,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd191d3f256e5accb8364117e1db3536612f3048a555fce3b60787b6553161b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:19:37.799011 kubelet[2672]: E0113 20:19:37.798609    2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd191d3f256e5accb8364117e1db3536612f3048a555fce3b60787b6553161b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:19:37.799011 kubelet[2672]: E0113 20:19:37.798717    2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd191d3f256e5accb8364117e1db3536612f3048a555fce3b60787b6553161b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-r4z66"
Jan 13 20:19:37.799011 kubelet[2672]: E0113 20:19:37.798772    2672 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd191d3f256e5accb8364117e1db3536612f3048a555fce3b60787b6553161b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-r4z66"
Jan 13 20:19:37.799011 kubelet[2672]: E0113 20:19:37.798819    2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-r4z66_kube-system(ebd5237e-f6c8-4d85-ae63-ca339a36c567)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-r4z66_kube-system(ebd5237e-f6c8-4d85-ae63-ca339a36c567)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd191d3f256e5accb8364117e1db3536612f3048a555fce3b60787b6553161b9\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-r4z66" podUID="ebd5237e-f6c8-4d85-ae63-ca339a36c567"
Jan 13 20:19:37.800789 containerd[1475]: time="2025-01-13T20:19:37.800739752Z" level=info msg="StartContainer for \"06f9fbab41d6421325a44d4111a887c33211cb30cad23a22af59c58847dc8a33\" returns successfully"
Jan 13 20:19:37.832905 containerd[1475]: time="2025-01-13T20:19:37.832797354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d8gsb,Uid:1a9ac66f-056e-4836-8608-11dee01b258b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"405dbec8ce1569559deae405c56d1a57a4016012be1ecd2ac366fdf55af2ad7c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:19:37.834512 kubelet[2672]: E0113 20:19:37.833871    2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405dbec8ce1569559deae405c56d1a57a4016012be1ecd2ac366fdf55af2ad7c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:19:37.834512 kubelet[2672]: E0113 20:19:37.833949    2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405dbec8ce1569559deae405c56d1a57a4016012be1ecd2ac366fdf55af2ad7c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-d8gsb"
Jan 13 20:19:37.834512 kubelet[2672]: E0113 20:19:37.833974    2672 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405dbec8ce1569559deae405c56d1a57a4016012be1ecd2ac366fdf55af2ad7c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-d8gsb"
Jan 13 20:19:37.834512 kubelet[2672]: E0113 20:19:37.834030    2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-d8gsb_kube-system(1a9ac66f-056e-4836-8608-11dee01b258b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-d8gsb_kube-system(1a9ac66f-056e-4836-8608-11dee01b258b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"405dbec8ce1569559deae405c56d1a57a4016012be1ecd2ac366fdf55af2ad7c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-d8gsb" podUID="1a9ac66f-056e-4836-8608-11dee01b258b"
Jan 13 20:19:38.209870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf7627466e1dd946fc39f582c0c25569243c6d54f6594a108d62c8871d00f039-rootfs.mount: Deactivated successfully.
Jan 13 20:19:38.730346 kubelet[2672]: I0113 20:19:38.730238    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-q755s" podStartSLOduration=2.9261070240000002 podStartE2EDuration="9.730212941s" podCreationTimestamp="2025-01-13 20:19:29 +0000 UTC" firstStartedPulling="2025-01-13 20:19:30.37899249 +0000 UTC m=+4.961753255" lastFinishedPulling="2025-01-13 20:19:37.183098407 +0000 UTC m=+11.765859172" observedRunningTime="2025-01-13 20:19:38.708791111 +0000 UTC m=+13.291551876" watchObservedRunningTime="2025-01-13 20:19:38.730212941 +0000 UTC m=+13.312973706"
Jan 13 20:19:38.916623 systemd-networkd[1369]: flannel.1: Link UP
Jan 13 20:19:38.916629 systemd-networkd[1369]: flannel.1: Gained carrier
Jan 13 20:19:40.530666 systemd-networkd[1369]: flannel.1: Gained IPv6LL
Jan 13 20:19:49.556026 containerd[1475]: time="2025-01-13T20:19:49.555612553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r4z66,Uid:ebd5237e-f6c8-4d85-ae63-ca339a36c567,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:49.609104 systemd-networkd[1369]: cni0: Link UP
Jan 13 20:19:49.609110 systemd-networkd[1369]: cni0: Gained carrier
Jan 13 20:19:49.611332 systemd-networkd[1369]: cni0: Lost carrier
Jan 13 20:19:49.618695 systemd-networkd[1369]: veth75c0e17e: Link UP
Jan 13 20:19:49.621525 kernel: cni0: port 1(veth75c0e17e) entered blocking state
Jan 13 20:19:49.621636 kernel: cni0: port 1(veth75c0e17e) entered disabled state
Jan 13 20:19:49.622793 kernel: veth75c0e17e: entered allmulticast mode
Jan 13 20:19:49.622945 kernel: veth75c0e17e: entered promiscuous mode
Jan 13 20:19:49.626595 kernel: cni0: port 1(veth75c0e17e) entered blocking state
Jan 13 20:19:49.626693 kernel: cni0: port 1(veth75c0e17e) entered forwarding state
Jan 13 20:19:49.628683 kernel: cni0: port 1(veth75c0e17e) entered disabled state
Jan 13 20:19:49.637539 kernel: cni0: port 1(veth75c0e17e) entered blocking state
Jan 13 20:19:49.637672 kernel: cni0: port 1(veth75c0e17e) entered forwarding state
Jan 13 20:19:49.638850 systemd-networkd[1369]: veth75c0e17e: Gained carrier
Jan 13 20:19:49.639097 systemd-networkd[1369]: cni0: Gained carrier
Jan 13 20:19:49.646390 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:19:49.646390 containerd[1475]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:19:49.666990 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:19:49.666780867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:49.666990 containerd[1475]: time="2025-01-13T20:19:49.666898028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:49.666990 containerd[1475]: time="2025-01-13T20:19:49.666914508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:49.667421 containerd[1475]: time="2025-01-13T20:19:49.667003829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:49.689806 systemd[1]: Started cri-containerd-b41310fc9025278fe5663d7caeeadec8569aa5178031a9fa6f876cb6cdb42c39.scope - libcontainer container b41310fc9025278fe5663d7caeeadec8569aa5178031a9fa6f876cb6cdb42c39.
Jan 13 20:19:49.728552 containerd[1475]: time="2025-01-13T20:19:49.728510184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r4z66,Uid:ebd5237e-f6c8-4d85-ae63-ca339a36c567,Namespace:kube-system,Attempt:0,} returns sandbox id \"b41310fc9025278fe5663d7caeeadec8569aa5178031a9fa6f876cb6cdb42c39\""
Jan 13 20:19:49.733051 containerd[1475]: time="2025-01-13T20:19:49.732967587Z" level=info msg="CreateContainer within sandbox \"b41310fc9025278fe5663d7caeeadec8569aa5178031a9fa6f876cb6cdb42c39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:19:49.756350 containerd[1475]: time="2025-01-13T20:19:49.756194931Z" level=info msg="CreateContainer within sandbox \"b41310fc9025278fe5663d7caeeadec8569aa5178031a9fa6f876cb6cdb42c39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55c0c099124f59a5d9602b6480855ac5a8cb78d0950eb08063e3b41aa5918721\""
Jan 13 20:19:49.759896 containerd[1475]: time="2025-01-13T20:19:49.758937158Z" level=info msg="StartContainer for \"55c0c099124f59a5d9602b6480855ac5a8cb78d0950eb08063e3b41aa5918721\""
Jan 13 20:19:49.796774 systemd[1]: Started cri-containerd-55c0c099124f59a5d9602b6480855ac5a8cb78d0950eb08063e3b41aa5918721.scope - libcontainer container 55c0c099124f59a5d9602b6480855ac5a8cb78d0950eb08063e3b41aa5918721.
Jan 13 20:19:49.831950 containerd[1475]: time="2025-01-13T20:19:49.831785742Z" level=info msg="StartContainer for \"55c0c099124f59a5d9602b6480855ac5a8cb78d0950eb08063e3b41aa5918721\" returns successfully"
Jan 13 20:19:50.707110 systemd-networkd[1369]: veth75c0e17e: Gained IPv6LL
Jan 13 20:19:50.758489 kubelet[2672]: I0113 20:19:50.757668    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-r4z66" podStartSLOduration=20.757648974 podStartE2EDuration="20.757648974s" podCreationTimestamp="2025-01-13 20:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:50.74075721 +0000 UTC m=+25.323517975" watchObservedRunningTime="2025-01-13 20:19:50.757648974 +0000 UTC m=+25.340409739"
Jan 13 20:19:51.090906 systemd-networkd[1369]: cni0: Gained IPv6LL
Jan 13 20:19:52.555078 containerd[1475]: time="2025-01-13T20:19:52.554916090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d8gsb,Uid:1a9ac66f-056e-4836-8608-11dee01b258b,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:52.594559 systemd-networkd[1369]: vethe6d1789a: Link UP
Jan 13 20:19:52.599299 kernel: cni0: port 2(vethe6d1789a) entered blocking state
Jan 13 20:19:52.599381 kernel: cni0: port 2(vethe6d1789a) entered disabled state
Jan 13 20:19:52.599397 kernel: vethe6d1789a: entered allmulticast mode
Jan 13 20:19:52.605233 kernel: vethe6d1789a: entered promiscuous mode
Jan 13 20:19:52.623017 kernel: cni0: port 2(vethe6d1789a) entered blocking state
Jan 13 20:19:52.623144 kernel: cni0: port 2(vethe6d1789a) entered forwarding state
Jan 13 20:19:52.621412 systemd-networkd[1369]: vethe6d1789a: Gained carrier
Jan 13 20:19:52.628767 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:19:52.628767 containerd[1475]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:19:52.660749 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:19:52.660413568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:52.660749 containerd[1475]: time="2025-01-13T20:19:52.660524609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:52.660749 containerd[1475]: time="2025-01-13T20:19:52.660543929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:52.660749 containerd[1475]: time="2025-01-13T20:19:52.660657970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:52.693756 systemd[1]: Started cri-containerd-2ea03568e246eab96b20eef8f7aedb99d4d3e5e61da1827beade09ba41df6947.scope - libcontainer container 2ea03568e246eab96b20eef8f7aedb99d4d3e5e61da1827beade09ba41df6947.
Jan 13 20:19:52.742410 containerd[1475]: time="2025-01-13T20:19:52.742358134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d8gsb,Uid:1a9ac66f-056e-4836-8608-11dee01b258b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ea03568e246eab96b20eef8f7aedb99d4d3e5e61da1827beade09ba41df6947\""
Jan 13 20:19:52.748711 containerd[1475]: time="2025-01-13T20:19:52.748542155Z" level=info msg="CreateContainer within sandbox \"2ea03568e246eab96b20eef8f7aedb99d4d3e5e61da1827beade09ba41df6947\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:19:52.768045 containerd[1475]: time="2025-01-13T20:19:52.767963306Z" level=info msg="CreateContainer within sandbox \"2ea03568e246eab96b20eef8f7aedb99d4d3e5e61da1827beade09ba41df6947\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbb572b53b893a202b036695c7d70e7fa667983e0a83f7521b9762afd448ce50\""
Jan 13 20:19:52.769621 containerd[1475]: time="2025-01-13T20:19:52.768797794Z" level=info msg="StartContainer for \"bbb572b53b893a202b036695c7d70e7fa667983e0a83f7521b9762afd448ce50\""
Jan 13 20:19:52.802743 systemd[1]: Started cri-containerd-bbb572b53b893a202b036695c7d70e7fa667983e0a83f7521b9762afd448ce50.scope - libcontainer container bbb572b53b893a202b036695c7d70e7fa667983e0a83f7521b9762afd448ce50.
Jan 13 20:19:52.842476 containerd[1475]: time="2025-01-13T20:19:52.841905633Z" level=info msg="StartContainer for \"bbb572b53b893a202b036695c7d70e7fa667983e0a83f7521b9762afd448ce50\" returns successfully"
Jan 13 20:19:53.746037 kubelet[2672]: I0113 20:19:53.745943    2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d8gsb" podStartSLOduration=23.745924046 podStartE2EDuration="23.745924046s" podCreationTimestamp="2025-01-13 20:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:53.744734194 +0000 UTC m=+28.327494959" watchObservedRunningTime="2025-01-13 20:19:53.745924046 +0000 UTC m=+28.328684811"
Jan 13 20:19:54.483536 systemd-networkd[1369]: vethe6d1789a: Gained IPv6LL
Jan 13 20:21:29.161743 kernel: hrtimer: interrupt took 2100996 ns
Jan 13 20:24:09.046790 systemd[1]: Started sshd@5-138.199.153.200:22-139.178.89.65:46110.service - OpenSSH per-connection server daemon (139.178.89.65:46110).
Jan 13 20:24:10.066639 sshd[4672]: Accepted publickey for core from 139.178.89.65 port 46110 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:10.070728 sshd-session[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:10.083550 systemd-logind[1447]: New session 6 of user core.
Jan 13 20:24:10.089881 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:24:10.862701 sshd[4680]: Connection closed by 139.178.89.65 port 46110
Jan 13 20:24:10.861117 sshd-session[4672]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:10.867317 systemd[1]: sshd@5-138.199.153.200:22-139.178.89.65:46110.service: Deactivated successfully.
Jan 13 20:24:10.868093 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:24:10.871042 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:24:10.877302 systemd-logind[1447]: Removed session 6.
Jan 13 20:24:16.047921 systemd[1]: Started sshd@6-138.199.153.200:22-139.178.89.65:56766.service - OpenSSH per-connection server daemon (139.178.89.65:56766).
Jan 13 20:24:17.058921 sshd[4713]: Accepted publickey for core from 139.178.89.65 port 56766 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:17.062797 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:17.072212 systemd-logind[1447]: New session 7 of user core.
Jan 13 20:24:17.079743 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:24:17.852603 sshd[4730]: Connection closed by 139.178.89.65 port 56766
Jan 13 20:24:17.853897 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:17.859374 systemd[1]: sshd@6-138.199.153.200:22-139.178.89.65:56766.service: Deactivated successfully.
Jan 13 20:24:17.859574 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:24:17.863690 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:24:17.866563 systemd-logind[1447]: Removed session 7.
Jan 13 20:24:18.037938 systemd[1]: Started sshd@7-138.199.153.200:22-139.178.89.65:56772.service - OpenSSH per-connection server daemon (139.178.89.65:56772).
Jan 13 20:24:19.041596 sshd[4743]: Accepted publickey for core from 139.178.89.65 port 56772 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:19.043342 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:19.051571 systemd-logind[1447]: New session 8 of user core.
Jan 13 20:24:19.056833 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:24:19.882265 sshd[4745]: Connection closed by 139.178.89.65 port 56772
Jan 13 20:24:19.885387 sshd-session[4743]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:19.890121 systemd[1]: sshd@7-138.199.153.200:22-139.178.89.65:56772.service: Deactivated successfully.
Jan 13 20:24:19.893320 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:24:19.895894 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:24:19.897848 systemd-logind[1447]: Removed session 8.
Jan 13 20:24:20.063006 systemd[1]: Started sshd@8-138.199.153.200:22-139.178.89.65:56780.service - OpenSSH per-connection server daemon (139.178.89.65:56780).
Jan 13 20:24:21.071114 sshd[4760]: Accepted publickey for core from 139.178.89.65 port 56780 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:21.074538 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:21.081970 systemd-logind[1447]: New session 9 of user core.
Jan 13 20:24:21.092199 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:24:21.881248 sshd[4766]: Connection closed by 139.178.89.65 port 56780
Jan 13 20:24:21.881928 sshd-session[4760]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:21.887667 systemd[1]: sshd@8-138.199.153.200:22-139.178.89.65:56780.service: Deactivated successfully.
Jan 13 20:24:21.889745 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:24:21.890807 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:24:21.892974 systemd-logind[1447]: Removed session 9.
Jan 13 20:24:27.058870 systemd[1]: Started sshd@9-138.199.153.200:22-139.178.89.65:46922.service - OpenSSH per-connection server daemon (139.178.89.65:46922).
Jan 13 20:24:28.054082 sshd[4811]: Accepted publickey for core from 139.178.89.65 port 46922 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:28.056284 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:28.063787 systemd-logind[1447]: New session 10 of user core.
Jan 13 20:24:28.072914 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:24:28.828401 sshd[4813]: Connection closed by 139.178.89.65 port 46922
Jan 13 20:24:28.829617 sshd-session[4811]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:28.836106 systemd[1]: sshd@9-138.199.153.200:22-139.178.89.65:46922.service: Deactivated successfully.
Jan 13 20:24:28.841746 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:24:28.845936 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:24:28.848131 systemd-logind[1447]: Removed session 10.
Jan 13 20:24:29.010159 systemd[1]: Started sshd@10-138.199.153.200:22-139.178.89.65:46926.service - OpenSSH per-connection server daemon (139.178.89.65:46926).
Jan 13 20:24:30.002535 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 46926 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:30.004102 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:30.021835 systemd-logind[1447]: New session 11 of user core.
Jan 13 20:24:30.026724 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:24:30.832545 sshd[4831]: Connection closed by 139.178.89.65 port 46926
Jan 13 20:24:30.833556 sshd-session[4823]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:30.842713 systemd[1]: sshd@10-138.199.153.200:22-139.178.89.65:46926.service: Deactivated successfully.
Jan 13 20:24:30.851187 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:24:30.854051 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:24:30.856521 systemd-logind[1447]: Removed session 11.
Jan 13 20:24:31.017879 systemd[1]: Started sshd@11-138.199.153.200:22-139.178.89.65:46936.service - OpenSSH per-connection server daemon (139.178.89.65:46936).
Jan 13 20:24:32.021107 sshd[4840]: Accepted publickey for core from 139.178.89.65 port 46936 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:32.023754 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:32.028758 systemd-logind[1447]: New session 12 of user core.
Jan 13 20:24:32.036833 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:24:34.457106 sshd[4859]: Connection closed by 139.178.89.65 port 46936
Jan 13 20:24:34.458858 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:34.465381 systemd[1]: sshd@11-138.199.153.200:22-139.178.89.65:46936.service: Deactivated successfully.
Jan 13 20:24:34.468850 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:24:34.473741 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:24:34.475739 systemd-logind[1447]: Removed session 12.
Jan 13 20:24:34.638958 systemd[1]: Started sshd@12-138.199.153.200:22-139.178.89.65:59566.service - OpenSSH per-connection server daemon (139.178.89.65:59566).
Jan 13 20:24:35.626407 sshd[4875]: Accepted publickey for core from 139.178.89.65 port 59566 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:35.628768 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:35.640501 systemd-logind[1447]: New session 13 of user core.
Jan 13 20:24:35.643743 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:24:36.537393 sshd[4883]: Connection closed by 139.178.89.65 port 59566
Jan 13 20:24:36.537945 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:36.548203 systemd[1]: sshd@12-138.199.153.200:22-139.178.89.65:59566.service: Deactivated successfully.
Jan 13 20:24:36.553948 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:24:36.555381 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:24:36.556937 systemd-logind[1447]: Removed session 13.
Jan 13 20:24:36.730728 systemd[1]: Started sshd@13-138.199.153.200:22-139.178.89.65:59574.service - OpenSSH per-connection server daemon (139.178.89.65:59574).
Jan 13 20:24:37.740461 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 59574 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:37.742712 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:37.750931 systemd-logind[1447]: New session 14 of user core.
Jan 13 20:24:37.755716 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:24:38.499298 sshd[4909]: Connection closed by 139.178.89.65 port 59574
Jan 13 20:24:38.498506 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:38.505423 systemd[1]: sshd@13-138.199.153.200:22-139.178.89.65:59574.service: Deactivated successfully.
Jan 13 20:24:38.511064 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:24:38.518720 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:24:38.521594 systemd-logind[1447]: Removed session 14.
Jan 13 20:24:43.677149 systemd[1]: Started sshd@14-138.199.153.200:22-139.178.89.65:47650.service - OpenSSH per-connection server daemon (139.178.89.65:47650).
Jan 13 20:24:44.679002 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 47650 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:44.682054 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:44.692288 systemd-logind[1447]: New session 15 of user core.
Jan 13 20:24:44.700190 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:24:45.448331 sshd[4946]: Connection closed by 139.178.89.65 port 47650
Jan 13 20:24:45.449731 sshd-session[4944]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:45.457015 systemd[1]: sshd@14-138.199.153.200:22-139.178.89.65:47650.service: Deactivated successfully.
Jan 13 20:24:45.462973 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:24:45.465695 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:24:45.467018 systemd-logind[1447]: Removed session 15.
Jan 13 20:24:50.623031 systemd[1]: Started sshd@15-138.199.153.200:22-139.178.89.65:47666.service - OpenSSH per-connection server daemon (139.178.89.65:47666).
Jan 13 20:24:51.608107 sshd[4984]: Accepted publickey for core from 139.178.89.65 port 47666 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:51.611285 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:51.621286 systemd-logind[1447]: New session 16 of user core.
Jan 13 20:24:51.628723 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:24:52.392012 sshd[5001]: Connection closed by 139.178.89.65 port 47666
Jan 13 20:24:52.393955 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:52.399351 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:24:52.400717 systemd[1]: sshd@15-138.199.153.200:22-139.178.89.65:47666.service: Deactivated successfully.
Jan 13 20:24:52.404300 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:24:52.406339 systemd-logind[1447]: Removed session 16.