Jan 13 20:21:53.907016 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:21:53.907041 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:21:53.907051 kernel: KASLR enabled
Jan 13 20:21:53.907057 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:21:53.907063 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98 
Jan 13 20:21:53.907068 kernel: random: crng init done
Jan 13 20:21:53.907075 kernel: secureboot: Secure boot disabled
Jan 13 20:21:53.907081 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:21:53.907087 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:21:53.907093 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS  BXPC     00000001      01000013)
Jan 13 20:21:53.907101 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907107 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907113 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907119 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907126 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907134 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907140 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907147 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907153 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:21:53.907159 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL  EDK2     00000002      01000013)
Jan 13 20:21:53.907165 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:21:53.907172 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:21:53.907178 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:21:53.907184 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 13 20:21:53.907190 kernel: Zone ranges:
Jan 13 20:21:53.907196 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:21:53.907204 kernel:   DMA32    empty
Jan 13 20:21:53.907210 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:21:53.907217 kernel: Movable zone start for each node
Jan 13 20:21:53.907223 kernel: Early memory node ranges
Jan 13 20:21:53.907229 kernel:   node   0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:21:53.907255 kernel:   node   0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:21:53.907261 kernel:   node   0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:21:53.907267 kernel:   node   0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:21:53.907273 kernel:   node   0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:21:53.907279 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:21:53.907286 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:21:53.907294 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:21:53.907300 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:21:53.907306 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:21:53.907315 kernel: psci: Trusted OS migration not required
Jan 13 20:21:53.907322 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:21:53.907328 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:21:53.907350 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:21:53.907358 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:21:53.907364 kernel: pcpu-alloc: [0] 0 [0] 1 
Jan 13 20:21:53.907371 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:21:53.907378 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:21:53.907385 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:21:53.907391 kernel: CPU features: detected: Spectre-v4
Jan 13 20:21:53.907398 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:21:53.907405 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:21:53.907411 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:21:53.907418 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:21:53.907427 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:21:53.907434 kernel: alternatives: applying boot alternatives
Jan 13 20:21:53.907442 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:21:53.907449 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:21:53.907456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:21:53.907462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:21:53.907469 kernel: Fallback order for Node 0: 0 
Jan 13 20:21:53.907476 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 13 20:21:53.907482 kernel: Policy zone: Normal
Jan 13 20:21:53.907489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:21:53.907495 kernel: software IO TLB: area num 2.
Jan 13 20:21:53.907504 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:21:53.907511 kernel: Memory: 3881336K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214664K reserved, 0K cma-reserved)
Jan 13 20:21:53.907518 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:21:53.907524 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:21:53.907532 kernel: rcu:         RCU event tracing is enabled.
Jan 13 20:21:53.907538 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:21:53.907545 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 13 20:21:53.907552 kernel:         Tracing variant of Tasks RCU enabled.
Jan 13 20:21:53.907559 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:21:53.907565 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:21:53.907572 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:21:53.907580 kernel: GICv3: 256 SPIs implemented
Jan 13 20:21:53.907587 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:21:53.907593 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:21:53.907600 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:21:53.907606 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:21:53.907613 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:21:53.907620 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:21:53.907627 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:21:53.907633 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:21:53.907640 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:21:53.907647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:21:53.907655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:21:53.907662 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:21:53.907668 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:21:53.907675 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:21:53.907682 kernel: Console: colour dummy device 80x25
Jan 13 20:21:53.907689 kernel: ACPI: Core revision 20230628
Jan 13 20:21:53.907696 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:21:53.907703 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:21:53.907710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:21:53.907717 kernel: landlock: Up and running.
Jan 13 20:21:53.907725 kernel: SELinux:  Initializing.
Jan 13 20:21:53.907732 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:21:53.907739 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:21:53.907746 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:21:53.907753 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:21:53.907760 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:21:53.907767 kernel: rcu:         Max phase no-delay instances is 400.
Jan 13 20:21:53.907774 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:21:53.907780 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:21:53.907789 kernel: Remapping and enabling EFI services.
Jan 13 20:21:53.907796 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:21:53.907802 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:21:53.907809 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:21:53.907817 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:21:53.907824 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:21:53.907831 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:21:53.907839 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:21:53.907846 kernel: SMP: Total of 2 processors activated.
Jan 13 20:21:53.907853 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:21:53.907861 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:21:53.907868 kernel: CPU features: detected: Common not Private translations
Jan 13 20:21:53.907881 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:21:53.907889 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:21:53.907896 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:21:53.907904 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:21:53.907911 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:21:53.907919 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:21:53.907926 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:21:53.907935 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:21:53.907942 kernel: alternatives: applying system-wide alternatives
Jan 13 20:21:53.907949 kernel: devtmpfs: initialized
Jan 13 20:21:53.907957 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:21:53.907965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:21:53.907972 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:21:53.907979 kernel: SMBIOS 3.0.0 present.
Jan 13 20:21:53.907986 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:21:53.907995 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:21:53.908002 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:21:53.908009 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:21:53.908017 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:21:53.908024 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:21:53.908031 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jan 13 20:21:53.908038 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:21:53.908046 kernel: cpuidle: using governor menu
Jan 13 20:21:53.908053 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:21:53.908062 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:21:53.908069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:21:53.908077 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:21:53.908084 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:21:53.908091 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:21:53.908098 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:21:53.908106 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:21:53.908113 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:21:53.908121 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:21:53.908130 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:21:53.908138 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:21:53.908147 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:21:53.908154 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:21:53.908163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:21:53.908170 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:21:53.908178 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:21:53.908185 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:21:53.908192 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:21:53.908201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:21:53.908208 kernel: ACPI: Interpreter enabled
Jan 13 20:21:53.908216 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:21:53.908223 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:21:53.908230 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:21:53.908318 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:21:53.910299 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:21:53.910502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:21:53.910583 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:21:53.910648 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:21:53.910711 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:21:53.910773 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:21:53.910782 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 13 20:21:53.910790 kernel: PCI host bridge to bus 0000:00
Jan 13 20:21:53.910861 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:21:53.910932 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:21:53.910989 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:21:53.911049 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:21:53.911138 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:21:53.911216 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:21:53.911353 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:21:53.911430 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:21:53.911510 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.911584 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:21:53.911676 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.911760 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:21:53.911844 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.911919 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:21:53.911996 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.912061 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:21:53.912134 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.912199 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:21:53.912876 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.912958 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:21:53.913044 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.913111 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:21:53.913184 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.913274 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:21:53.913410 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:21:53.913483 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:21:53.913565 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:21:53.913633 kernel: pci 0000:00:04.0: reg 0x10: [io  0x8200-0x8207]
Jan 13 20:21:53.913713 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:21:53.913781 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:21:53.913849 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:21:53.913916 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:21:53.913992 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:21:53.914064 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:21:53.914141 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:21:53.914209 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:21:53.914298 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:21:53.914398 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:21:53.914468 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:21:53.914552 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:21:53.914620 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:21:53.914694 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:21:53.914761 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:21:53.914827 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:21:53.914902 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:21:53.914976 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:21:53.915043 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:21:53.915110 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:21:53.915178 kernel: pci 0000:00:02.0: bridge window [io  0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:21:53.918454 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:21:53.918585 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:21:53.918662 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:21:53.918738 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:21:53.918838 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:21:53.918909 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:21:53.918987 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:21:53.919050 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:21:53.919119 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:21:53.919185 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:21:53.919359 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:21:53.919438 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:21:53.919501 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:21:53.920370 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:21:53.920462 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:21:53.920527 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:21:53.920590 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:21:53.920658 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:21:53.920729 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:21:53.920795 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:21:53.920863 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:21:53.920928 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:21:53.920992 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:21:53.921064 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:21:53.921137 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:21:53.921204 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:21:53.922672 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:21:53.922771 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:21:53.922844 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:21:53.922911 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:21:53.922982 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:21:53.923047 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:21:53.923128 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:21:53.923197 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:21:53.924393 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:21:53.924488 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:21:53.924559 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:21:53.924627 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:21:53.924695 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:21:53.924770 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:21:53.924837 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:21:53.924901 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:21:53.924970 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:21:53.925039 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:21:53.925108 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:21:53.925179 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:21:53.926394 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:21:53.926509 kernel: pci 0000:00:02.0: BAR 13: assigned [io  0x1000-0x1fff]
Jan 13 20:21:53.926581 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:21:53.926647 kernel: pci 0000:00:02.1: BAR 13: assigned [io  0x2000-0x2fff]
Jan 13 20:21:53.926717 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:21:53.926781 kernel: pci 0000:00:02.2: BAR 13: assigned [io  0x3000-0x3fff]
Jan 13 20:21:53.926849 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:21:53.926917 kernel: pci 0000:00:02.3: BAR 13: assigned [io  0x4000-0x4fff]
Jan 13 20:21:53.926997 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:21:53.927064 kernel: pci 0000:00:02.4: BAR 13: assigned [io  0x5000-0x5fff]
Jan 13 20:21:53.927136 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:21:53.927206 kernel: pci 0000:00:02.5: BAR 13: assigned [io  0x6000-0x6fff]
Jan 13 20:21:53.927945 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:21:53.928025 kernel: pci 0000:00:02.6: BAR 13: assigned [io  0x7000-0x7fff]
Jan 13 20:21:53.928094 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:21:53.928159 kernel: pci 0000:00:02.7: BAR 13: assigned [io  0x8000-0x8fff]
Jan 13 20:21:53.928261 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:21:53.928381 kernel: pci 0000:00:03.0: BAR 13: assigned [io  0x9000-0x9fff]
Jan 13 20:21:53.928470 kernel: pci 0000:00:04.0: BAR 0: assigned [io  0xa000-0xa007]
Jan 13 20:21:53.928549 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:21:53.928617 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:21:53.928686 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:21:53.928755 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:21:53.928828 kernel: pci 0000:00:02.0:   bridge window [io  0x1000-0x1fff]
Jan 13 20:21:53.928892 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:21:53.930322 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:21:53.930464 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:21:53.930537 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:21:53.930615 kernel: pci 0000:00:02.1:   bridge window [io  0x2000-0x2fff]
Jan 13 20:21:53.930680 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:21:53.930745 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:21:53.930820 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:21:53.930889 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:21:53.930957 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:21:53.931023 kernel: pci 0000:00:02.2:   bridge window [io  0x3000-0x3fff]
Jan 13 20:21:53.931087 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:21:53.931162 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:21:53.932300 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:21:53.932423 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:21:53.932492 kernel: pci 0000:00:02.3:   bridge window [io  0x4000-0x4fff]
Jan 13 20:21:53.932557 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:21:53.932621 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:21:53.932695 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:21:53.932762 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:21:53.932836 kernel: pci 0000:00:02.4:   bridge window [io  0x5000-0x5fff]
Jan 13 20:21:53.932899 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:21:53.932962 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:21:53.933035 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:21:53.933101 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:21:53.933170 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:21:53.934228 kernel: pci 0000:00:02.5:   bridge window [io  0x6000-0x6fff]
Jan 13 20:21:53.936488 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:21:53.936578 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:21:53.936659 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:21:53.936729 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:21:53.936797 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:21:53.936867 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:21:53.936934 kernel: pci 0000:00:02.6:   bridge window [io  0x7000-0x7fff]
Jan 13 20:21:53.936999 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:21:53.937068 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:21:53.937141 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:21:53.937209 kernel: pci 0000:00:02.7:   bridge window [io  0x8000-0x8fff]
Jan 13 20:21:53.938226 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:21:53.938410 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:21:53.938484 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:21:53.938551 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Jan 13 20:21:53.938616 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:21:53.938692 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:21:53.938765 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:21:53.938824 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:21:53.938881 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:21:53.938954 kernel: pci_bus 0000:01: resource 0 [io  0x1000-0x1fff]
Jan 13 20:21:53.939016 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:21:53.939075 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:21:53.939148 kernel: pci_bus 0000:02: resource 0 [io  0x2000-0x2fff]
Jan 13 20:21:53.939209 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:21:53.939311 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:21:53.939442 kernel: pci_bus 0000:03: resource 0 [io  0x3000-0x3fff]
Jan 13 20:21:53.939510 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:21:53.939572 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:21:53.939644 kernel: pci_bus 0000:04: resource 0 [io  0x4000-0x4fff]
Jan 13 20:21:53.939713 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:21:53.939779 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:21:53.939935 kernel: pci_bus 0000:05: resource 0 [io  0x5000-0x5fff]
Jan 13 20:21:53.940016 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:21:53.940079 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:21:53.940151 kernel: pci_bus 0000:06: resource 0 [io  0x6000-0x6fff]
Jan 13 20:21:53.940225 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:21:53.940446 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:21:53.940525 kernel: pci_bus 0000:07: resource 0 [io  0x7000-0x7fff]
Jan 13 20:21:53.940584 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:21:53.940650 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:21:53.940720 kernel: pci_bus 0000:08: resource 0 [io  0x8000-0x8fff]
Jan 13 20:21:53.940779 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:21:53.940837 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:21:53.940905 kernel: pci_bus 0000:09: resource 0 [io  0x9000-0x9fff]
Jan 13 20:21:53.940965 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:21:53.941024 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:21:53.941037 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:21:53.941045 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:21:53.941053 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:21:53.941061 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:21:53.941069 kernel: iommu: Default domain type: Translated
Jan 13 20:21:53.941077 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:21:53.941085 kernel: efivars: Registered efivars operations
Jan 13 20:21:53.941093 kernel: vgaarb: loaded
Jan 13 20:21:53.941101 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:21:53.941110 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:21:53.941118 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:21:53.941126 kernel: pnp: PnP ACPI init
Jan 13 20:21:53.941203 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:21:53.941215 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:21:53.941223 kernel: NET: Registered PF_INET protocol family
Jan 13 20:21:53.941248 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:21:53.941258 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:21:53.941266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:21:53.941277 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:21:53.941287 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:21:53.941295 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:21:53.941303 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:21:53.941310 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:21:53.941318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:21:53.941418 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:21:53.941437 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:21:53.941448 kernel: kvm [1]: HYP mode not available
Jan 13 20:21:53.941456 kernel: Initialise system trusted keyrings
Jan 13 20:21:53.941464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:21:53.941472 kernel: Key type asymmetric registered
Jan 13 20:21:53.941480 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:21:53.941490 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:21:53.941498 kernel: io scheduler mq-deadline registered
Jan 13 20:21:53.941508 kernel: io scheduler kyber registered
Jan 13 20:21:53.941520 kernel: io scheduler bfq registered
Jan 13 20:21:53.941529 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:21:53.941615 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:21:53.941696 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:21:53.941762 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.943586 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:21:53.943697 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:21:53.943767 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.943852 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:21:53.943920 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 13 20:21:53.943990 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.944064 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 13 20:21:53.944134 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 13 20:21:53.944203 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.944313 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 13 20:21:53.944414 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 13 20:21:53.945692 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.945803 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 13 20:21:53.945874 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 13 20:21:53.945943 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.946029 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 13 20:21:53.946099 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 13 20:21:53.946166 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.947493 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 13 20:21:53.947631 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 13 20:21:53.947702 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.947723 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 13 20:21:53.947796 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 13 20:21:53.947866 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 13 20:21:53.947934 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:21:53.947945 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:21:53.947954 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:21:53.947962 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:21:53.948042 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Jan 13 20:21:53.948120 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 13 20:21:53.948193 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 13 20:21:53.948205 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:21:53.948213 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:21:53.949394 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 13 20:21:53.949427 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 13 20:21:53.949435 kernel: thunder_xcv, ver 1.0
Jan 13 20:21:53.949451 kernel: thunder_bgx, ver 1.0
Jan 13 20:21:53.949459 kernel: nicpf, ver 1.0
Jan 13 20:21:53.949467 kernel: nicvf, ver 1.0
Jan 13 20:21:53.949561 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:21:53.949627 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:21:53 UTC (1736799713)
Jan 13 20:21:53.949637 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:21:53.949645 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:21:53.949653 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:21:53.949663 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:21:53.949671 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:21:53.949679 kernel: Segment Routing with IPv6
Jan 13 20:21:53.949688 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:21:53.949696 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:21:53.949704 kernel: Key type dns_resolver registered
Jan 13 20:21:53.949712 kernel: registered taskstats version 1
Jan 13 20:21:53.949719 kernel: Loading compiled-in X.509 certificates
Jan 13 20:21:53.949727 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:21:53.949737 kernel: Key type .fscrypt registered
Jan 13 20:21:53.949746 kernel: Key type fscrypt-provisioning registered
Jan 13 20:21:53.949753 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:21:53.949761 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:21:53.949768 kernel: ima: No architecture policies found
Jan 13 20:21:53.949776 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:21:53.949784 kernel: clk: Disabling unused clocks
Jan 13 20:21:53.949791 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:21:53.949799 kernel: Run /init as init process
Jan 13 20:21:53.949809 kernel:   with arguments:
Jan 13 20:21:53.949817 kernel:     /init
Jan 13 20:21:53.949824 kernel:   with environment:
Jan 13 20:21:53.949832 kernel:     HOME=/
Jan 13 20:21:53.949839 kernel:     TERM=linux
Jan 13 20:21:53.949847 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:21:53.949857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:21:53.949867 systemd[1]: Detected virtualization kvm.
Jan 13 20:21:53.949878 systemd[1]: Detected architecture arm64.
Jan 13 20:21:53.949886 systemd[1]: Running in initrd.
Jan 13 20:21:53.949894 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:21:53.949902 systemd[1]: Hostname set to <localhost>.
Jan 13 20:21:53.949910 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:21:53.949919 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:21:53.949927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:21:53.949935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:21:53.949947 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:21:53.949955 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:21:53.949963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:21:53.949972 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:21:53.949981 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:21:53.949989 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:21:53.949999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:21:53.950007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:21:53.950016 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:21:53.950024 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:21:53.950032 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:21:53.950040 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:21:53.950048 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:21:53.950056 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:21:53.950065 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:21:53.950074 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:21:53.950083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:21:53.950091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:21:53.950099 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:21:53.950108 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:21:53.950116 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:21:53.950124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:21:53.950132 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:21:53.950141 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:21:53.950151 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:21:53.950159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:21:53.950191 systemd-journald[237]: Collecting audit messages is disabled.
Jan 13 20:21:53.950213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:21:53.950225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:21:53.950252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:21:53.950261 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:21:53.950270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:21:53.950281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:21:53.950289 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:21:53.950298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:21:53.950307 systemd-journald[237]: Journal started
Jan 13 20:21:53.950327 systemd-journald[237]: Runtime Journal (/run/log/journal/453259727a0a46a8a8df4a1d5c708d87) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:21:53.923452 systemd-modules-load[238]: Inserted module 'overlay'
Jan 13 20:21:53.952179 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:21:53.952206 kernel: Bridge firewalling registered
Jan 13 20:21:53.952282 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 13 20:21:53.955168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:21:53.956305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:21:53.972593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:21:53.975614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:21:53.980632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:21:53.984394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:21:53.986783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:21:53.994559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:21:54.000278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:21:54.008584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:21:54.010834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:21:54.019268 dracut-cmdline[270]: dracut-dracut-053
Jan 13 20:21:54.024647 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:21:54.047937 systemd-resolved[272]: Positive Trust Anchors:
Jan 13 20:21:54.048014 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:21:54.048045 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:21:54.058224 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 13 20:21:54.060883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:21:54.061640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:21:54.109322 kernel: SCSI subsystem initialized
Jan 13 20:21:54.114267 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:21:54.123514 kernel: iscsi: registered transport (tcp)
Jan 13 20:21:54.137480 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:21:54.137587 kernel: QLogic iSCSI HBA Driver
Jan 13 20:21:54.193920 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:21:54.197544 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:21:54.224607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:21:54.224675 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:21:54.224686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:21:54.274292 kernel: raid6: neonx8   gen() 15659 MB/s
Jan 13 20:21:54.291302 kernel: raid6: neonx4   gen() 15543 MB/s
Jan 13 20:21:54.308293 kernel: raid6: neonx2   gen() 13111 MB/s
Jan 13 20:21:54.325419 kernel: raid6: neonx1   gen() 10432 MB/s
Jan 13 20:21:54.342278 kernel: raid6: int64x8  gen()  6925 MB/s
Jan 13 20:21:54.359290 kernel: raid6: int64x4  gen()  7245 MB/s
Jan 13 20:21:54.376721 kernel: raid6: int64x2  gen()  6093 MB/s
Jan 13 20:21:54.393297 kernel: raid6: int64x1  gen()  5034 MB/s
Jan 13 20:21:54.393398 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s
Jan 13 20:21:54.410323 kernel: raid6: .... xor() 11778 MB/s, rmw enabled
Jan 13 20:21:54.410430 kernel: raid6: using neon recovery algorithm
Jan 13 20:21:54.415298 kernel: xor: measuring software checksum speed
Jan 13 20:21:54.415424 kernel:    8regs           : 19788 MB/sec
Jan 13 20:21:54.415453 kernel:    32regs          : 16607 MB/sec
Jan 13 20:21:54.416370 kernel:    arm64_neon      : 26919 MB/sec
Jan 13 20:21:54.416424 kernel: xor: using function: arm64_neon (26919 MB/sec)
Jan 13 20:21:54.469293 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:21:54.486799 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:21:54.493665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:21:54.510405 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 13 20:21:54.514764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:21:54.524066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:21:54.542555 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 13 20:21:54.580975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:21:54.594702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:21:54.646306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:21:54.655649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:21:54.684615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:21:54.685694 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:21:54.688476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:21:54.690752 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:21:54.697592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:21:54.731960 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:21:54.780666 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:21:54.781665 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU CD-ROM      2.5+ PQ: 0 ANSI: 5
Jan 13 20:21:54.781707 kernel: scsi 0:0:0:1: Direct-Access     QEMU     QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
Jan 13 20:21:54.840344 kernel: ACPI: bus type USB registered
Jan 13 20:21:54.840412 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:21:54.840074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:21:54.840194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:21:54.843357 kernel: usbcore: registered new interface driver hub
Jan 13 20:21:54.843385 kernel: usbcore: registered new device driver usb
Jan 13 20:21:54.843324 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:21:54.843932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:21:54.844120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:21:54.845094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:21:54.856600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:21:54.866402 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 13 20:21:54.866632 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 13 20:21:54.866729 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:21:54.866750 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:21:54.878314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:21:54.885767 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 13 20:21:54.895448 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 13 20:21:54.895582 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 13 20:21:54.895666 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 13 20:21:54.895750 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:21:54.895851 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:21:54.895862 kernel: GPT:17805311 != 80003071
Jan 13 20:21:54.895871 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:21:54.895881 kernel: GPT:17805311 != 80003071
Jan 13 20:21:54.895890 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:21:54.895899 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:21:54.895910 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
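
The GPT complaints above are expected on a first boot: the disk image was built smaller than the provisioned volume, so the backup GPT header still sits at the end of the original image (LBA 17805311) rather than at the end of the 80003072-sector disk (LBA 80003071). A minimal sketch of that arithmetic, using only the sector counts from the log (512-byte logical blocks):

    package main

    import "fmt"

    func main() {
        const sectorSize = 512           // logical block size reported for sda
        const diskSectors = 80003072     // "80003072 512-byte logical blocks" above
        const backupHeaderLBA = 17805311 // where the image left its backup header

        fmt.Println("disk last LBA: ", diskSectors-1)   // 80003071, where it should be
        fmt.Println("backup header: ", backupHeaderLBA) // 17805311, where it actually is
        imageBytes := (backupHeaderLBA + 1) * sectorSize
        fmt.Printf("original image: %.2f GiB\n", float64(imageBytes)/(1<<30)) // ~8.49 GiB
    }

The disk-uuid.service run further down rewrites both headers, after which the kernel re-reads a clean partition table.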
Jan 13 20:21:54.887116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:21:54.903284 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:21:54.925563 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:21:54.925689 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:21:54.925786 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:21:54.925870 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:21:54.925948 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:21:54.926025 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:21:54.927057 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:21:54.927191 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:21:54.927407 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:21:54.927524 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:21:54.921663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:21:54.967271 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (512)
Jan 13 20:21:54.969162 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (511)
Jan 13 20:21:54.970193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 13 20:21:54.980062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 13 20:21:54.992036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:21:54.997158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 13 20:21:55.000440 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 13 20:21:55.008505 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:21:55.017309 disk-uuid[577]: Primary Header is updated.
Jan 13 20:21:55.017309 disk-uuid[577]: Secondary Entries is updated.
Jan 13 20:21:55.017309 disk-uuid[577]: Secondary Header is updated.
Jan 13 20:21:55.161307 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 13 20:21:55.404284 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 13 20:21:55.540863 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 13 20:21:55.540923 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 13 20:21:55.543289 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 13 20:21:55.597619 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 13 20:21:55.598149 kernel: usbcore: registered new interface driver usbhid
Jan 13 20:21:55.598217 kernel: usbhid: USB HID core driver
Jan 13 20:21:56.037276 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:21:56.037382 disk-uuid[578]: The operation has completed successfully.
Jan 13 20:21:56.085198 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:21:56.085400 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
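
disk-uuid.service regenerates the GPT disk GUID on first boot so each provisioned machine gets its own identifiers rather than the ones baked into the shared image; rewriting the headers also clears the backup-header complaints above, and the kernel then re-reads the partition table (the repeated "sda: sda1 sda2 ..." line). The service itself drives a GPT tool; purely as an illustration of the GUID-generation step, a random RFC 4122 version-4 UUID can be built like this:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    // newUUIDv4 returns a random RFC 4122 version-4 UUID string, the kind
    // of identifier a regenerated GPT disk GUID needs.
    func newUUIDv4() (string, error) {
        var b [16]byte
        if _, err := rand.Read(b[:]); err != nil {
            return "", err
        }
        b[6] = (b[6] & 0x0f) | 0x40 // set version 4
        b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
        return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
    }

    func main() {
        id, err := newUUIDv4()
        if err != nil {
            panic(err)
        }
        fmt.Println("new disk GUID:", id)
    }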
Jan 13 20:21:56.107589 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:21:56.113677 sh[587]: Success
Jan 13 20:21:56.134435 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:21:56.191402 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:21:56.193646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
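
/dev/mapper/usr is a dm-verity device: blocks read from the /usr partition are checked against a sha256 hash tree, so a tampered or corrupted block fails the read instead of returning bad data, and the "sha256-ce" line shows the ARMv8 Cryptography Extensions implementation being selected. As a toy sketch of the leaf level only (real verity salts each hash and stacks further tree levels until a single root hash remains):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // leafHashes hashes each 4 KiB data block: the bottom level of a verity
    // hash tree. Upper levels hash these hashes in turn until one root hash
    // remains for the kernel to pin.
    func leafHashes(data []byte) [][32]byte {
        const blockSize = 4096
        var hashes [][32]byte
        for off := 0; off < len(data); off += blockSize {
            end := off + blockSize
            if end > len(data) {
                end = len(data)
            }
            hashes = append(hashes, sha256.Sum256(data[off:end]))
        }
        return hashes
    }

    func main() {
        data := make([]byte, 3*4096+100) // three full blocks plus a tail
        fmt.Println("leaf hashes:", len(leafHashes(data)))
    }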
Jan 13 20:21:56.196883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:21:56.229330 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:21:56.229404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:21:56.229416 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:21:56.230301 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:21:56.230397 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:21:56.237277 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:21:56.240959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:21:56.242726 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:21:56.249566 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:21:56.255018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:21:56.267417 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:21:56.267482 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:21:56.267497 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:21:56.274225 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:21:56.274339 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:21:56.287079 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:21:56.288355 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:21:56.293976 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:21:56.305536 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:21:56.383672 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:21:56.394626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:21:56.409629 ignition[673]: Ignition 2.20.0
Jan 13 20:21:56.409644 ignition[673]: Stage: fetch-offline
Jan 13 20:21:56.409689 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:56.413365 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:21:56.409697 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:56.409855 ignition[673]: parsed url from cmdline: ""
Jan 13 20:21:56.409858 ignition[673]: no config URL provided
Jan 13 20:21:56.409863 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:21:56.409870 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:21:56.409875 ignition[673]: failed to fetch config: resource requires networking
Jan 13 20:21:56.410071 ignition[673]: Ignition finished successfully
Jan 13 20:21:56.421961 systemd-networkd[774]: lo: Link UP
Jan 13 20:21:56.421976 systemd-networkd[774]: lo: Gained carrier
Jan 13 20:21:56.424025 systemd-networkd[774]: Enumeration completed
Jan 13 20:21:56.424154 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:21:56.425190 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:21:56.425194 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:21:56.426010 systemd[1]: Reached target network.target - Network.
Jan 13 20:21:56.427672 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:21:56.427675 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:21:56.428290 systemd-networkd[774]: eth0: Link UP
Jan 13 20:21:56.428293 systemd-networkd[774]: eth0: Gained carrier
Jan 13 20:21:56.428301 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:21:56.433703 systemd-networkd[774]: eth1: Link UP
Jan 13 20:21:56.433706 systemd-networkd[774]: eth1: Gained carrier
Jan 13 20:21:56.433717 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:21:56.435570 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:21:56.448686 ignition[778]: Ignition 2.20.0
Jan 13 20:21:56.449441 ignition[778]: Stage: fetch
Jan 13 20:21:56.449674 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:56.449686 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:56.449795 ignition[778]: parsed url from cmdline: ""
Jan 13 20:21:56.449799 ignition[778]: no config URL provided
Jan 13 20:21:56.449804 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:21:56.449813 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:21:56.449899 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 13 20:21:56.453437 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 20:21:56.474388 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:21:56.481368 systemd-networkd[774]: eth0: DHCPv4 address 138.199.153.83/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:21:56.654221 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 13 20:21:56.659892 ignition[778]: GET result: OK
Jan 13 20:21:56.660017 ignition[778]: parsing config with SHA512: 499149c4bbc82481980bb93e22ad977ddff5ca5aed8dfb2708675f54b7da3b216e75ecd254605882efd5d7328e534318815ec4bca8f016a657716f14fd3562d2
Jan 13 20:21:56.665751 unknown[778]: fetched base config from "system"
Jan 13 20:21:56.665763 unknown[778]: fetched base config from "system"
Jan 13 20:21:56.666215 ignition[778]: fetch: fetch complete
Jan 13 20:21:56.665769 unknown[778]: fetched user config from "hetzner"
Jan 13 20:21:56.666222 ignition[778]: fetch: fetch passed
Jan 13 20:21:56.666296 ignition[778]: Ignition finished successfully
Jan 13 20:21:56.669085 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
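
The two GET attempts above show the usual first-boot race: attempt #1 fires before DHCP has configured the interfaces ("network is unreachable"), and attempt #2 succeeds once the eth0/eth1 leases arrive; the fetched config is then fingerprinted with SHA512 before parsing. A minimal sketch of that retry-then-hash pattern, assuming the Hetzner userdata URL from the log (illustrative, not Ignition's actual fetcher):

    package main

    import (
        "crypto/sha512"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    const userdataURL = "http://169.254.169.254/hetzner/v1/userdata"

    // fetchUserdata retries with a growing backoff, since early attempts can
    // fail while the network is still coming up.
    func fetchUserdata() ([]byte, error) {
        var lastErr error
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := http.Get(userdataURL)
            if err != nil {
                lastErr = err // e.g. "connect: network is unreachable" before DHCP
                time.Sleep(time.Duration(attempt) * 200 * time.Millisecond)
                continue
            }
            body, err := io.ReadAll(resp.Body)
            resp.Body.Close()
            if err != nil {
                lastErr = err
                continue
            }
            return body, nil
        }
        return nil, lastErr
    }

    func main() {
        data, err := fetchUserdata()
        if err != nil {
            panic(err)
        }
        fmt.Printf("parsing config with SHA512: %x\n", sha512.Sum512(data))
    }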
Jan 13 20:21:56.674525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:21:56.688378 ignition[785]: Ignition 2.20.0
Jan 13 20:21:56.688390 ignition[785]: Stage: kargs
Jan 13 20:21:56.688573 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:56.688582 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:56.689560 ignition[785]: kargs: kargs passed
Jan 13 20:21:56.689616 ignition[785]: Ignition finished successfully
Jan 13 20:21:56.692171 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:21:56.697508 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:21:56.722878 ignition[792]: Ignition 2.20.0
Jan 13 20:21:56.722891 ignition[792]: Stage: disks
Jan 13 20:21:56.723098 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:56.723107 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:56.724190 ignition[792]: disks: disks passed
Jan 13 20:21:56.726662 ignition[792]: Ignition finished successfully
Jan 13 20:21:56.728582 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:21:56.730092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:21:56.731111 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:21:56.731863 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:21:56.732450 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:21:56.733380 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:21:56.749761 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:21:56.770622 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:21:56.775214 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:21:56.783445 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:21:56.834590 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:21:56.835490 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:21:56.837029 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:21:56.854487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:21:56.859085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:21:56.861457 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:21:56.865109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:21:56.868395 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:21:56.871105 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (809)
Jan 13 20:21:56.871132 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:21:56.871145 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:21:56.872670 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:21:56.874747 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:21:56.876731 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:21:56.876768 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:21:56.881649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:21:56.885768 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:21:56.940151 coreos-metadata[811]: Jan 13 20:21:56.939 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 13 20:21:56.942402 coreos-metadata[811]: Jan 13 20:21:56.942 INFO Fetch successful
Jan 13 20:21:56.945469 coreos-metadata[811]: Jan 13 20:21:56.944 INFO wrote hostname ci-4152-2-0-6-49e4a12287 to /sysroot/etc/hostname
Jan 13 20:21:56.947003 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:21:56.949516 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:21:56.954425 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:21:56.959391 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:21:56.964194 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:21:57.073086 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:21:57.078738 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:21:57.093508 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:21:57.103286 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:21:57.124900 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:21:57.131443 ignition[928]: INFO     : Ignition 2.20.0
Jan 13 20:21:57.131443 ignition[928]: INFO     : Stage: mount
Jan 13 20:21:57.131443 ignition[928]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:57.131443 ignition[928]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:57.135503 ignition[928]: INFO     : mount: mount passed
Jan 13 20:21:57.135503 ignition[928]: INFO     : Ignition finished successfully
Jan 13 20:21:57.135103 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:21:57.142409 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:21:57.229308 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:21:57.238602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:21:57.249272 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (938)
Jan 13 20:21:57.251266 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:21:57.251332 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:21:57.251346 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:21:57.254270 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:21:57.254355 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:21:57.258088 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:21:57.280900 ignition[955]: INFO     : Ignition 2.20.0
Jan 13 20:21:57.280900 ignition[955]: INFO     : Stage: files
Jan 13 20:21:57.280900 ignition[955]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:57.280900 ignition[955]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:57.283856 ignition[955]: DEBUG    : files: compiled without relabeling support, skipping
Jan 13 20:21:57.284676 ignition[955]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 20:21:57.284676 ignition[955]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:21:57.287931 ignition[955]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:21:57.288880 ignition[955]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 20:21:57.290296 unknown[955]: wrote ssh authorized keys file for user: core
Jan 13 20:21:57.291266 ignition[955]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:21:57.295096 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:21:57.295096 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:21:57.386990 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:21:57.702879 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:21:57.702879 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:21:57.705398 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:21:57.893547 systemd-networkd[774]: eth1: Gained IPv6LL
Jan 13 20:21:58.277539 systemd-networkd[774]: eth0: Gained IPv6LL
Jan 13 20:21:58.296501 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:21:58.384613 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:21:58.385753 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:21:58.395512 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:21:58.395512 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:21:58.395512 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:21:58.926540 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:21:59.361285 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:21:59.361285 ignition[955]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(e): [started]  processing unit "coreos-metadata.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(e): op(f): [started]  writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 20:21:59.365224 ignition[955]: INFO     : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Jan 13 20:21:59.376646 ignition[955]: INFO     : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:21:59.376646 ignition[955]: INFO     : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:21:59.376646 ignition[955]: INFO     : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:21:59.376646 ignition[955]: INFO     : files: files passed
Jan 13 20:21:59.376646 ignition[955]: INFO     : Ignition finished successfully
Jan 13 20:21:59.369983 systemd[1]: Finished ignition-files.service - Ignition (files).
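
The files-stage ops above (users, fetched archives, units, presets) are all driven by the parsed Ignition config. As a trimmed, hypothetical reconstruction of the shape such a config takes, here is an Ignition v3-style fragment embedded in a Go snippet that just checks it parses; field names follow the Ignition spec, but the values are inferred from the log and the version string is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Hypothetical fragment matching two of the ops in the log: one fetched
    // file (op(3)) and one enabled unit (op(10)). Not the real config.
    const fragment = `{
      "ignition": {"version": "3.3.0"},
      "storage": {"files": [{
        "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
        "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}
      }]},
      "systemd": {"units": [{"name": "prepare-helm.service", "enabled": true}]}
    }`

    func main() {
        var cfg map[string]any
        if err := json.Unmarshal([]byte(fragment), &cfg); err != nil {
            panic(err)
        }
        fmt.Println("config parses; storage section:", cfg["storage"])
    }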
Jan 13 20:21:59.379355 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:21:59.383151 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:21:59.385687 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:21:59.387630 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:21:59.398968 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:21:59.398968 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:21:59.401073 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:21:59.404437 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:21:59.405692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:21:59.424626 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:21:59.466405 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:21:59.466609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:21:59.469595 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:21:59.470294 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:21:59.471227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:21:59.478580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:21:59.492392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:21:59.498544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:21:59.510077 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:21:59.511408 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:21:59.512109 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:21:59.513205 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:21:59.513393 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:21:59.514753 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:21:59.515393 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:21:59.516409 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:21:59.517414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:21:59.518388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:21:59.519976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:21:59.520952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:21:59.522004 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:21:59.523071 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:21:59.524006 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:21:59.524827 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:21:59.525120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:21:59.526274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:21:59.527342 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:21:59.528368 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:21:59.528856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:21:59.529551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:21:59.529669 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:21:59.531155 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:21:59.531285 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:21:59.533370 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:21:59.533584 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:21:59.534730 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:21:59.534835 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:21:59.541751 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:21:59.545645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:21:59.546288 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:21:59.546468 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:21:59.549071 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:21:59.549417 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:21:59.560199 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:21:59.561073 ignition[1008]: INFO     : Ignition 2.20.0
Jan 13 20:21:59.561073 ignition[1008]: INFO     : Stage: umount
Jan 13 20:21:59.561073 ignition[1008]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:21:59.561073 ignition[1008]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:21:59.565398 ignition[1008]: INFO     : umount: umount passed
Jan 13 20:21:59.565398 ignition[1008]: INFO     : Ignition finished successfully
Jan 13 20:21:59.562846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:21:59.566035 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:21:59.568076 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:21:59.573800 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:21:59.574881 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:21:59.574989 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:21:59.576716 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:21:59.576831 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:21:59.577900 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:21:59.577954 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:21:59.578822 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:21:59.578884 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:21:59.579715 systemd[1]: Stopped target network.target - Network.
Jan 13 20:21:59.580498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:21:59.580549 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:21:59.581478 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:21:59.582214 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:21:59.586365 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:21:59.587780 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:21:59.588748 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:21:59.589415 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:21:59.589476 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:21:59.590720 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:21:59.590771 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:21:59.591976 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:21:59.592051 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:21:59.593113 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:21:59.593173 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:21:59.594405 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:21:59.594469 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:21:59.595485 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:21:59.596378 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:21:59.600383 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 13 20:21:59.603946 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:21:59.604213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:21:59.605340 systemd-networkd[774]: eth1: DHCPv6 lease lost
Jan 13 20:21:59.608772 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:21:59.608977 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:21:59.611516 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:21:59.611561 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:21:59.617494 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:21:59.618526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:21:59.618647 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:21:59.622423 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:21:59.622490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:21:59.624584 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:21:59.624684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:21:59.626031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:21:59.626095 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:21:59.627421 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:21:59.640935 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:21:59.641673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:21:59.647501 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:21:59.647697 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:21:59.649764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:21:59.649819 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:21:59.651589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:21:59.651624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:21:59.652559 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:21:59.652609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:21:59.654013 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:21:59.654057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:21:59.655438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:21:59.655482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:21:59.661514 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:21:59.662370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:21:59.662457 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:21:59.666038 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:21:59.666117 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:21:59.666904 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:21:59.666948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:21:59.668427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:21:59.668487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:21:59.670119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:21:59.670255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:21:59.671930 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:21:59.679558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:21:59.688204 systemd[1]: Switching root.
Jan 13 20:21:59.723913 systemd-journald[237]: Journal stopped
Jan 13 20:22:00.738051 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:22:00.738117 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 20:22:00.738130 kernel: SELinux:  policy capability open_perms=1
Jan 13 20:22:00.738139 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 20:22:00.738149 kernel: SELinux:  policy capability always_check_network=0
Jan 13 20:22:00.738158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 20:22:00.738171 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 20:22:00.738181 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 20:22:00.738193 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 20:22:00.738203 kernel: audit: type=1403 audit(1736799719.924:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:22:00.738218 systemd[1]: Successfully loaded SELinux policy in 34.112ms.
Jan 13 20:22:00.740165 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.844ms.
Jan 13 20:22:00.740203 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:22:00.740215 systemd[1]: Detected virtualization kvm.
Jan 13 20:22:00.740230 systemd[1]: Detected architecture arm64.
Jan 13 20:22:00.740260 systemd[1]: Detected first boot.
Jan 13 20:22:00.740271 systemd[1]: Hostname set to <ci-4152-2-0-6-49e4a12287>.
Jan 13 20:22:00.740281 systemd[1]: Initializing machine ID from VM UUID.
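
"Initializing machine ID from VM UUID" means /etc/machine-id is seeded from the SMBIOS product UUID the hypervisor exposes, giving each VM a distinct, reproducible identity on first boot. A sketch of where that value lives, assuming the standard DMI sysfs path:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // SMBIOS product UUID exported via the firmware tables; on KVM this
        // is the UUID the host assigned to the guest.
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            panic(err)
        }
        fmt.Println("VM UUID:", strings.TrimSpace(string(raw)))
    }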
Jan 13 20:22:00.740336 zram_generator::config[1051]: No configuration found.
Jan 13 20:22:00.740352 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:22:00.740503 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:22:00.740520 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:22:00.740537 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:22:00.740548 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:22:00.740558 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:22:00.740569 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:22:00.740579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:22:00.740589 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:22:00.740599 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:22:00.740610 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:22:00.740620 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:22:00.740632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:22:00.740650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:22:00.740661 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:22:00.740671 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:22:00.740682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:22:00.740693 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:22:00.740703 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:22:00.740713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:22:00.740724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:22:00.740736 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:22:00.740746 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:22:00.740757 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:22:00.740767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:22:00.740777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:22:00.740827 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:22:00.740844 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:22:00.740856 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:22:00.740866 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:22:00.740877 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:22:00.740887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:22:00.740897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:22:00.740908 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:22:00.740918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:22:00.740928 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:22:00.740940 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:22:00.740951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:22:00.740961 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:22:00.740971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:22:00.740983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:22:00.740993 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:22:00.741007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:22:00.741019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:22:00.741030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:22:00.741040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:22:00.741050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:22:00.741061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:22:00.741073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:22:00.741083 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:22:00.741095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:22:00.741106 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:22:00.741116 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:22:00.741126 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:22:00.741136 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:22:00.741146 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:22:00.741156 kernel: fuse: init (API version 7.39)
Jan 13 20:22:00.741167 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:22:00.741178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:22:00.741190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:22:00.741201 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:22:00.741211 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:22:00.741222 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:22:00.749275 systemd[1]: Stopped verity-setup.service.
Jan 13 20:22:00.749365 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:22:00.749378 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:22:00.749390 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:22:00.749400 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:22:00.749419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:22:00.749430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:22:00.749441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:22:00.749452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:22:00.749463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:22:00.749475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:22:00.749485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:22:00.749495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:22:00.749506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:22:00.749518 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:22:00.749531 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:22:00.749541 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:22:00.749552 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:22:00.749563 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:22:00.749574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:22:00.749622 systemd-journald[1121]: Collecting audit messages is disabled.
Jan 13 20:22:00.749655 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:22:00.749665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:22:00.749677 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:22:00.749688 kernel: loop: module loaded
Jan 13 20:22:00.749700 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:22:00.749711 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:22:00.749722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:22:00.749733 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:22:00.749743 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:22:00.749754 systemd-journald[1121]: Journal started
Jan 13 20:22:00.749778 systemd-journald[1121]: Runtime Journal (/run/log/journal/453259727a0a46a8a8df4a1d5c708d87) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:22:00.447207 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:22:00.473343 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 20:22:00.473946 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:22:00.753323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:22:00.764255 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:22:00.764343 kernel: ACPI: bus type drm_connector registered
Jan 13 20:22:00.764361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:22:00.769397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:22:00.772344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:22:00.775600 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:22:00.779262 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:22:00.781613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:22:00.782949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:22:00.783119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:22:00.784052 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:22:00.784200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:22:00.785121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:22:00.786175 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:22:00.817811 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:22:00.829484 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:22:00.831262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:22:00.840062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:22:00.843550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:22:00.847198 kernel: loop0: detected capacity change from 0 to 194512
Jan 13 20:22:00.858461 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:22:00.860907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:22:00.868030 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Jan 13 20:22:00.868046 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Jan 13 20:22:00.878478 systemd-journald[1121]: Time spent on flushing to /var/log/journal/453259727a0a46a8a8df4a1d5c708d87 is 92.149ms for 1135 entries.
Jan 13 20:22:00.878478 systemd-journald[1121]: System Journal (/var/log/journal/453259727a0a46a8a8df4a1d5c708d87) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:22:00.988564 systemd-journald[1121]: Received client request to flush runtime journal.
Jan 13 20:22:00.988613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:22:00.988627 kernel: loop1: detected capacity change from 0 to 113536
Jan 13 20:22:00.881924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:22:00.900447 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:22:00.933384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:22:00.948006 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:22:00.952000 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:22:00.954644 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:22:00.989382 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:22:00.992826 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:22:00.998146 kernel: loop2: detected capacity change from 0 to 8
Jan 13 20:22:01.004647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:22:01.007344 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:22:01.031366 kernel: loop3: detected capacity change from 0 to 116808
Jan 13 20:22:01.037570 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 13 20:22:01.037591 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 13 20:22:01.049269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:22:01.071445 kernel: loop4: detected capacity change from 0 to 194512
Jan 13 20:22:01.097508 kernel: loop5: detected capacity change from 0 to 113536
Jan 13 20:22:01.112722 kernel: loop6: detected capacity change from 0 to 8
Jan 13 20:22:01.116267 kernel: loop7: detected capacity change from 0 to 116808
Jan 13 20:22:01.124895 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 13 20:22:01.125484 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 13 20:22:01.131877 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:22:01.132071 systemd[1]: Reloading...
Jan 13 20:22:01.233344 zram_generator::config[1221]: No configuration found.
Jan 13 20:22:01.354671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:22:01.400466 systemd[1]: Reloading finished in 267 ms.
Jan 13 20:22:01.433131 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
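
The merge pulled the four extension images named above (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) into the /usr hierarchy, which is why systemd-sysext requested the daemon reload: unit files shipped in the extensions had just appeared. Under the hood the merge is an overlayfs mount with the extension trees as read-only lower layers; a rough conceptual sketch with golang.org/x/sys/unix, using hypothetical /run/ext/* paths as stand-ins for the mounted images (the real service manages those mounts itself):

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Hypothetical lower-layer paths standing in for the mounted
        // extension images; the base /usr is the bottom layer.
        opts := "lowerdir=/run/ext/kubernetes/usr:/run/ext/docker-flatcar/usr:/usr"
        if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, opts); err != nil {
            log.Fatalf("overlay mount over /usr: %v", err)
        }
    }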
Jan 13 20:22:01.444458 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:22:01.446264 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:22:01.448839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:22:01.459360 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:22:01.471471 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:22:01.471500 systemd[1]: Reloading...
Jan 13 20:22:01.510878 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:22:01.511137 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:22:01.512048 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:22:01.514973 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 13 20:22:01.515042 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 13 20:22:01.520789 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:22:01.520807 systemd-tmpfiles[1258]: Skipping /boot
Jan 13 20:22:01.537742 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:22:01.537763 systemd-tmpfiles[1258]: Skipping /boot
Jan 13 20:22:01.577296 zram_generator::config[1285]: No configuration found.
Jan 13 20:22:01.679046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:22:01.725542 systemd[1]: Reloading finished in 253 ms.
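
The docker.socket warning repeated above is advisory: systemd rewrites the legacy /var/run path to /run on the fly. A drop-in that would silence it by updating the unit itself could look like this (the drop-in location is an assumption; clearing ListenStream with an empty assignment before re-adding it is standard systemd socket-unit practice):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload
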
Jan 13 20:22:01.746752 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:22:01.753091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
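
The "Duplicate line" notices above are benign: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles honours only the first one it parses. A sketch of how such a duplicate arises, with the fragment name invented for illustration:

    # A second fragment redeclaring /root reproduces the notice on the
    # next run (provision.conf already declares /root per the log above).
    echo 'd /root 0700 root root -' | sudo tee /etc/tmpfiles.d/dup-demo.conf
    sudo systemd-tmpfiles --create 2>&1 | grep -i 'Duplicate line'
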
Jan 13 20:22:01.772768 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:22:01.777592 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:22:01.781535 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:22:01.789413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:22:01.793632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:22:01.799613 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:22:01.804359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:22:01.808717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:22:01.819022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:22:01.824443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:22:01.826418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:22:01.835963 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:22:01.839739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:22:01.839893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:22:01.846558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:22:01.857699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:22:01.858874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:22:01.860336 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:22:01.862858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:22:01.864506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:22:01.873758 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Jan 13 20:22:01.876267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:22:01.883607 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:22:01.884969 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:22:01.891140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:22:01.891337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:22:01.897168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:22:01.900993 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:22:01.902361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:22:01.907128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:22:01.907321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:22:01.912108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:22:01.920384 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:22:01.921326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:22:01.924000 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:22:01.950706 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:22:01.951272 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:22:01.951739 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:22:01.997439 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:22:02.000095 augenrules[1387]: No rules
Jan 13 20:22:02.005131 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:22:02.006174 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:22:02.026037 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:22:02.150317 systemd-networkd[1368]: lo: Link UP
Jan 13 20:22:02.150330 systemd-networkd[1368]: lo: Gained carrier
Jan 13 20:22:02.151991 systemd-networkd[1368]: Enumeration completed
Jan 13 20:22:02.152261 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:22:02.155342 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:02.155347 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:22:02.156618 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:02.156637 systemd-networkd[1368]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:22:02.158640 systemd-networkd[1368]: eth0: Link UP
Jan 13 20:22:02.158653 systemd-networkd[1368]: eth0: Gained carrier
Jan 13 20:22:02.158675 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:02.163654 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:22:02.164385 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:22:02.164970 systemd-networkd[1368]: eth1: Link UP
Jan 13 20:22:02.164974 systemd-networkd[1368]: eth1: Gained carrier
Jan 13 20:22:02.164996 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:02.165036 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:22:02.182017 systemd-resolved[1328]: Positive Trust Anchors:
Jan 13 20:22:02.182100 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:22:02.182133 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:22:02.188596 systemd-resolved[1328]: Using system hostname 'ci-4152-2-0-6-49e4a12287'.
Jan 13 20:22:02.191597 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:22:02.192336 systemd[1]: Reached target network.target - Network.
Jan 13 20:22:02.192777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
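
The "Positive Trust Anchors" entry above is systemd-resolved's built-in DNSSEC root trust anchor (the 2017 root KSK, key tag 20326), and the negative anchors list private-use zones it will never try to validate. The runtime state behind these log lines can be inspected with standard resolvectl commands (shown as a sketch):

    resolvectl status              # per-link DNS servers and DNSSEC state
    resolvectl query example.com   # resolve through the local stub resolver
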
Jan 13 20:22:02.196910 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:02.208411 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:22:02.208369 systemd-networkd[1368]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:22:02.210433 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection.
Jan 13 20:22:02.218337 systemd-networkd[1368]: eth0: DHCPv4 address 138.199.153.83/32, gateway 172.31.1.1 acquired from 172.31.1.1
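
Both NICs above matched the catch-all zz-default.network, which enables DHCP for any otherwise-unconfigured interface; the "potentially unpredictable interface name" note is networkd's nudge to pin interfaces by a stable property instead of a name glob. A pinned override might look like this (file name and MAC address are placeholders):

    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    MACAddress=00:11:22:33:44:55
    [Network]
    DHCP=yes
    EOF
    sudo systemctl restart systemd-networkd
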
Jan 13 20:22:02.253628 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 13 20:22:02.254407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:22:02.262526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:22:02.264647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:22:02.269323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:22:02.270450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:22:02.270488 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:22:02.270874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:22:02.271657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:22:02.280766 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:22:02.281781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:22:02.288628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:22:02.298274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1377)
Jan 13 20:22:02.300779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:22:02.300960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:22:02.302551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:22:02.359123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:22:02.367756 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 13 20:22:02.367828 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:22:02.367841 kernel: [drm] features: -context_init
Jan 13 20:22:02.367884 kernel: [drm] number of scanouts: 1
Jan 13 20:22:02.367896 kernel: [drm] number of cap sets: 0
Jan 13 20:22:02.368097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:22:02.371266 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 13 20:22:02.376880 systemd-timesyncd[1357]: Contacted time server 148.251.5.46:123 (0.flatcar.pool.ntp.org).
Jan 13 20:22:02.377059 systemd-timesyncd[1357]: Initial clock synchronization to Mon 2025-01-13 20:22:02.098749 UTC.
Jan 13 20:22:02.378699 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:22:02.379465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:22:02.388275 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:22:02.404752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:22:02.404982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:22:02.412727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:22:02.416792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:22:02.483781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:22:02.514816 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:22:02.522526 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:22:02.537679 lvm[1443]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:22:02.570105 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:22:02.572435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:22:02.573947 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:22:02.575518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:22:02.576179 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:22:02.577098 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:22:02.577879 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:22:02.578565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:22:02.579160 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:22:02.579195 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:22:02.579735 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:22:02.581659 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:22:02.586109 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:22:02.592743 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:22:02.595299 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:22:02.596752 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:22:02.597526 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:22:02.598104 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:22:02.598807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:22:02.598847 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:22:02.601490 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:22:02.606979 lvm[1447]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:22:02.607520 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:22:02.616595 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:22:02.620473 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:22:02.625889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:22:02.628028 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:22:02.638909 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:22:02.641510 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:22:02.647581 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 13 20:22:02.668335 jq[1451]: false
Jan 13 20:22:02.671645 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:22:02.678516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:22:02.686603 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:22:02.687989 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:22:02.688653 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:22:02.690698 dbus-daemon[1450]: [system] SELinux support is enabled
Jan 13 20:22:02.692880 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:22:02.697128 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:22:02.699053 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found loop4
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found loop5
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found loop6
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found loop7
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda1
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda2
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda3
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found usr
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda4
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda6
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda7
Jan 13 20:22:02.706528 extend-filesystems[1452]: Found sda9
Jan 13 20:22:02.706528 extend-filesystems[1452]: Checking size of /dev/sda9
Jan 13 20:22:02.704199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:22:02.766997 coreos-metadata[1449]: Jan 13 20:22:02.706 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 13 20:22:02.766997 coreos-metadata[1449]: Jan 13 20:22:02.709 INFO Fetch successful
Jan 13 20:22:02.766997 coreos-metadata[1449]: Jan 13 20:22:02.709 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 13 20:22:02.766997 coreos-metadata[1449]: Jan 13 20:22:02.709 INFO Fetch successful
Jan 13 20:22:02.769300 extend-filesystems[1452]: Resized partition /dev/sda9
Jan 13 20:22:02.713675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:22:02.770818 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:22:02.713870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:22:02.734099 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:22:02.773814 jq[1463]: true
Jan 13 20:22:02.734145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:22:02.735444 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:22:02.735468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:22:02.741933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:22:02.742114 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:22:02.789271 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 13 20:22:02.790212 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:22:02.799506 jq[1479]: true
Jan 13 20:22:02.807887 tar[1469]: linux-arm64/helm
Jan 13 20:22:02.815398 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:22:02.816651 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:22:02.853420 update_engine[1462]: I20250113 20:22:02.850649  1462 main.cc:92] Flatcar Update Engine starting
Jan 13 20:22:02.865760 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:22:02.868488 update_engine[1462]: I20250113 20:22:02.867883  1462 update_check_scheduler.cc:74] Next update check in 4m24s
Jan 13 20:22:02.874532 systemd[1]: Started locksmithd.service - Cluster reboot manager.
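
update-engine and locksmithd above cooperate: update-engine downloads Flatcar updates (its next check is in 4m24s per the log), while locksmithd coordinates the post-update reboot, here with strategy "reboot". On Flatcar the strategy is normally set in update.conf; a sketch, where the value is one of reboot, etcd-lock, or off:

    echo 'REBOOT_STRATEGY=reboot' | sudo tee -a /etc/flatcar/update.conf
    sudo systemctl restart locksmithd
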
Jan 13 20:22:02.930874 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1361)
Jan 13 20:22:02.965131 systemd-logind[1461]: New seat seat0.
Jan 13 20:22:02.967269 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:22:02.967309 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 13 20:22:02.967327 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:22:02.968299 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:22:02.970876 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:22:02.986384 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 13 20:22:02.999691 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 13 20:22:02.999691 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 13 20:22:02.999691 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 13 20:22:03.007178 extend-filesystems[1452]: Resized filesystem in /dev/sda9
Jan 13 20:22:03.007178 extend-filesystems[1452]: Found sr0
Jan 13 20:22:03.001262 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:22:03.009033 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:22:03.003089 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
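
extend-filesystems above grew the root ext4 from 1617920 to 9393147 4k blocks to fill the enlarged /dev/sda9, resizing online while mounted on /. The moral equivalent by hand would be (a sketch, not the exact Flatcar script; growpart comes from cloud-utils and may not be present, and the partition may already have been grown earlier in boot):

    sudo growpart /dev/sda 9    # widen partition 9 if the table has slack
    sudo resize2fs /dev/sda9    # grow ext4 in place, as the log shows
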
Jan 13 20:22:03.008711 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:22:03.027375 systemd[1]: Starting sshkeys.service...
Jan 13 20:22:03.057717 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:22:03.072631 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:22:03.131731 coreos-metadata[1528]: Jan 13 20:22:03.130 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 13 20:22:03.132782 coreos-metadata[1528]: Jan 13 20:22:03.132 INFO Fetch successful
Jan 13 20:22:03.137968 unknown[1528]: wrote ssh authorized keys file for user: core
Jan 13 20:22:03.181181 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:22:03.182044 containerd[1481]: time="2025-01-13T20:22:03.181949429Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:22:03.183654 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:22:03.190357 systemd[1]: Finished sshkeys.service.
Jan 13 20:22:03.202347 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:22:03.247824 containerd[1481]: time="2025-01-13T20:22:03.247769620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.249483 containerd[1481]: time="2025-01-13T20:22:03.249436141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:22:03.249603 containerd[1481]: time="2025-01-13T20:22:03.249590388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.249670774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.249840464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.249865406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.249926333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.249939538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250111313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250125097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250136834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250144981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250223012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.250851 containerd[1481]: time="2025-01-13T20:22:03.250455675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:22:03.251102 containerd[1481]: time="2025-01-13T20:22:03.250573899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:22:03.251102 containerd[1481]: time="2025-01-13T20:22:03.250588223Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:22:03.251102 containerd[1481]: time="2025-01-13T20:22:03.250658455Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:22:03.251102 containerd[1481]: time="2025-01-13T20:22:03.250696640Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:22:03.256832 containerd[1481]: time="2025-01-13T20:22:03.256786626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:22:03.257012 containerd[1481]: time="2025-01-13T20:22:03.256997783Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:22:03.257293 containerd[1481]: time="2025-01-13T20:22:03.257272570Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:22:03.257420 containerd[1481]: time="2025-01-13T20:22:03.257403844Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:22:03.257577 containerd[1481]: time="2025-01-13T20:22:03.257558747Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:22:03.258073 containerd[1481]: time="2025-01-13T20:22:03.258048630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:22:03.258598 containerd[1481]: time="2025-01-13T20:22:03.258572336Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:22:03.258865 containerd[1481]: time="2025-01-13T20:22:03.258846968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:22:03.258953 containerd[1481]: time="2025-01-13T20:22:03.258937817Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:22:03.259048 containerd[1481]: time="2025-01-13T20:22:03.259034149Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:22:03.259123 containerd[1481]: time="2025-01-13T20:22:03.259110327Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259197 containerd[1481]: time="2025-01-13T20:22:03.259183879Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259325 containerd[1481]: time="2025-01-13T20:22:03.259309052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259407 containerd[1481]: time="2025-01-13T20:22:03.259392720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259486 containerd[1481]: time="2025-01-13T20:22:03.259471600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259605 containerd[1481]: time="2025-01-13T20:22:03.259543916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259605 containerd[1481]: time="2025-01-13T20:22:03.259564341Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.259605 containerd[1481]: time="2025-01-13T20:22:03.259582063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259712217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259737931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259753066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259778279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259797815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259814147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259829128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259845576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259861483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259887197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259902718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.259917429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.260145845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.260165266Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:22:03.261262 containerd[1481]: time="2025-01-13T20:22:03.260194378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260211135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260230285Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260430710Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260459474Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260471057Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260483798Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260492949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260505999Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260516694Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:22:03.261573 containerd[1481]: time="2025-01-13T20:22:03.260527929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:22:03.261732 containerd[1481]: time="2025-01-13T20:22:03.260883102Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:22:03.261732 containerd[1481]: time="2025-01-13T20:22:03.260942755Z" level=info msg="Connect containerd service"
Jan 13 20:22:03.261732 containerd[1481]: time="2025-01-13T20:22:03.260978932Z" level=info msg="using legacy CRI server"
Jan 13 20:22:03.261732 containerd[1481]: time="2025-01-13T20:22:03.260985187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:22:03.262145 containerd[1481]: time="2025-01-13T20:22:03.262112250Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:22:03.263436 containerd[1481]: time="2025-01-13T20:22:03.263399042Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:22:03.263803 containerd[1481]: time="2025-01-13T20:22:03.263759775Z" level=info msg="Start subscribing containerd event"
Jan 13 20:22:03.263940 containerd[1481]: time="2025-01-13T20:22:03.263921396Z" level=info msg="Start recovering state"
Jan 13 20:22:03.264106 containerd[1481]: time="2025-01-13T20:22:03.264090353Z" level=info msg="Start event monitor"
Jan 13 20:22:03.264161 containerd[1481]: time="2025-01-13T20:22:03.264149427Z" level=info msg="Start snapshots syncer"
Jan 13 20:22:03.264351 containerd[1481]: time="2025-01-13T20:22:03.264334368Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:22:03.264401 containerd[1481]: time="2025-01-13T20:22:03.264390353Z" level=info msg="Start streaming server"
Jan 13 20:22:03.265605 containerd[1481]: time="2025-01-13T20:22:03.265576605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:22:03.265672 containerd[1481]: time="2025-01-13T20:22:03.265641122Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:22:03.265820 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:22:03.267024 containerd[1481]: time="2025-01-13T20:22:03.266765868Z" level=info msg="containerd successfully booted in 0.088227s"
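
The long CRI configuration dump above is containerd's effective config; the operative bits for a kubeadm-style node are the overlayfs snapshotter and runc with the systemd cgroup driver (SystemdCgroup:true in the dump). Expressed as a minimal /etc/containerd/config.toml, written here via shell purely for illustration:

    sudo tee /etc/containerd/config.toml <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
    sudo systemctl restart containerd
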
Jan 13 20:22:03.458078 tar[1469]: linux-arm64/LICENSE
Jan 13 20:22:03.458609 tar[1469]: linux-arm64/README.md
Jan 13 20:22:03.472084 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:22:03.624844 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:22:03.650874 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:22:03.653613 systemd-networkd[1368]: eth1: Gained IPv6LL
Jan 13 20:22:03.660650 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:22:03.662691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:22:03.666638 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:22:03.675060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:03.684810 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:22:03.687270 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:22:03.689354 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:22:03.704362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:22:03.714521 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:22:03.721611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:22:03.732193 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:22:03.735435 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:22:03.737344 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:22:03.781446 systemd-networkd[1368]: eth0: Gained IPv6LL
Jan 13 20:22:04.391538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:04.392831 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:22:04.395341 systemd[1]: Startup finished in 785ms (kernel) + 6.234s (initrd) + 4.504s (userspace) = 11.525s.
Jan 13 20:22:04.400299 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:05.019649 kubelet[1580]: E0113 20:22:05.019538    1580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:05.022723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:05.022879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
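
The kubelet exit above is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is written by "kubeadm init" or "kubeadm join", so the unit crash-loops until one of those runs. Purely as an illustration of what that file contains, a minimal hand-written KubeletConfiguration (a stand-in, not what kubeadm generates verbatim):

    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in containerd
    EOF
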
Jan 13 20:22:15.273798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:22:15.285605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:15.397573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:15.400167 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:15.457761 kubelet[1600]: E0113 20:22:15.457615    1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:15.462148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:15.462475 systemd[1]: kubelet.service: Failed with result 'exit-code'.
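
The steady ten-second respawn cadence visible in the restart counters comes from the unit's restart policy; the relevant [Service] directives would look like the comments below (values inferred from the observed interval, not read from the unit file):

    # Restart=always
    # RestartSec=10
    systemctl cat kubelet.service | grep -i restart   # verify on the host
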
Jan 13 20:22:25.712803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:22:25.718604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:25.840303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:25.846465 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:25.896280 kubelet[1616]: E0113 20:22:25.896203    1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:25.899105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:25.899296 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:22:35.965611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:22:35.980624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:36.085080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:36.099087 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:36.154214 kubelet[1632]: E0113 20:22:36.154140    1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:36.157487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:36.157779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:22:46.214966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:22:46.222626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:46.351707 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:46.352462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:46.400937 kubelet[1648]: E0113 20:22:46.400870    1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:46.403520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:46.403654 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:22:47.916095 update_engine[1462]: I20250113 20:22:47.915978  1462 update_attempter.cc:509] Updating boot flags...
Jan 13 20:22:47.963055 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1666)
Jan 13 20:22:48.023427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1661)
Jan 13 20:22:56.464804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 13 20:22:56.472708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:22:56.587754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:22:56.599796 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:22:56.650540 kubelet[1683]: E0113 20:22:56.650410    1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:22:56.653201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:22:56.653371 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:06.715344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 13 20:23:06.731607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:06.841704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:06.855038 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:06.906665 kubelet[1699]: E0113 20:23:06.906533    1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:06.908921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:06.909066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:16.965085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 13 20:23:16.976641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:17.095625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:17.106858 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:17.164943 kubelet[1716]: E0113 20:23:17.164889    1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:17.168988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:17.169280 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:27.214906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 13 20:23:27.223572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:27.355114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:27.362803 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:27.418936 kubelet[1733]: E0113 20:23:27.418864    1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:27.422777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:27.422975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:37.465117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 13 20:23:37.473624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:37.598784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:37.604564 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:37.660947 kubelet[1749]: E0113 20:23:37.660804    1749 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:37.664409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:37.664611 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:47.715102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 13 20:23:47.720510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:47.839025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:47.844215 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:47.901878 kubelet[1765]: E0113 20:23:47.901737    1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:47.905400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:47.905678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:51.745441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:23:51.755738 systemd[1]: Started sshd@0-138.199.153.83:22-147.75.109.163:47260.service - OpenSSH per-connection server daemon (147.75.109.163:47260).
Jan 13 20:23:52.744293 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 47260 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:23:52.747657 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:52.757419 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:23:52.763755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:23:52.767563 systemd-logind[1461]: New session 1 of user core.
Jan 13 20:23:52.778356 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:23:52.785693 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:23:52.806470 (systemd)[1779]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:23:52.911941 systemd[1779]: Queued start job for default target default.target.
Jan 13 20:23:52.923586 systemd[1779]: Created slice app.slice - User Application Slice.
Jan 13 20:23:52.923676 systemd[1779]: Reached target paths.target - Paths.
Jan 13 20:23:52.923711 systemd[1779]: Reached target timers.target - Timers.
Jan 13 20:23:52.925864 systemd[1779]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:23:52.942081 systemd[1779]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:23:52.942213 systemd[1779]: Reached target sockets.target - Sockets.
Jan 13 20:23:52.942227 systemd[1779]: Reached target basic.target - Basic System.
Jan 13 20:23:52.942292 systemd[1779]: Reached target default.target - Main User Target.
Jan 13 20:23:52.942322 systemd[1779]: Startup finished in 128ms.
Jan 13 20:23:52.942457 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:23:52.951553 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:23:53.646172 systemd[1]: Started sshd@1-138.199.153.83:22-147.75.109.163:47262.service - OpenSSH per-connection server daemon (147.75.109.163:47262).
Jan 13 20:23:54.631301 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 47262 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:23:54.633251 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:54.638042 systemd-logind[1461]: New session 2 of user core.
Jan 13 20:23:54.646571 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:23:55.313585 sshd[1792]: Connection closed by 147.75.109.163 port 47262
Jan 13 20:23:55.314441 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:55.321069 systemd[1]: sshd@1-138.199.153.83:22-147.75.109.163:47262.service: Deactivated successfully.
Jan 13 20:23:55.323201 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:23:55.324657 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:23:55.326429 systemd-logind[1461]: Removed session 2.
Jan 13 20:23:55.484530 systemd[1]: Started sshd@2-138.199.153.83:22-147.75.109.163:47264.service - OpenSSH per-connection server daemon (147.75.109.163:47264).
Jan 13 20:23:56.502159 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 47264 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:23:56.504082 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:56.510502 systemd-logind[1461]: New session 3 of user core.
Jan 13 20:23:56.519663 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:23:57.185741 sshd[1799]: Connection closed by 147.75.109.163 port 47264
Jan 13 20:23:57.186537 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:57.191099 systemd[1]: sshd@2-138.199.153.83:22-147.75.109.163:47264.service: Deactivated successfully.
Jan 13 20:23:57.193079 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:23:57.194177 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:23:57.195224 systemd-logind[1461]: Removed session 3.
Jan 13 20:23:57.360711 systemd[1]: Started sshd@3-138.199.153.83:22-147.75.109.163:47274.service - OpenSSH per-connection server daemon (147.75.109.163:47274).
Jan 13 20:23:57.965064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 13 20:23:57.971555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:58.105678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:58.110732 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:58.164824 kubelet[1814]: E0113 20:23:58.164122    1814 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:58.167874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:58.168039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:58.348185 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 47274 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:23:58.350730 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:58.358027 systemd-logind[1461]: New session 4 of user core.
Jan 13 20:23:58.368540 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:23:59.032618 sshd[1822]: Connection closed by 147.75.109.163 port 47274
Jan 13 20:23:59.032484 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:59.038414 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:23:59.039364 systemd[1]: sshd@3-138.199.153.83:22-147.75.109.163:47274.service: Deactivated successfully.
Jan 13 20:23:59.042603 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:23:59.044471 systemd-logind[1461]: Removed session 4.
Jan 13 20:23:59.204311 systemd[1]: Started sshd@4-138.199.153.83:22-147.75.109.163:36664.service - OpenSSH per-connection server daemon (147.75.109.163:36664).
Jan 13 20:24:00.199627 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 36664 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:00.202052 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:00.207621 systemd-logind[1461]: New session 5 of user core.
Jan 13 20:24:00.216180 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:24:00.734077 sudo[1830]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:24:00.734854 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:24:00.751123 sudo[1830]: pam_unix(sudo:session): session closed for user root
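
sudo records each escalation in a fixed key/value shape: invoking user, working directory, target user, command. A small illustrative parser for that shape — the field list is the minimal one seen in these lines; real entries often also carry fields such as TTY=:

```go
// sudoparse extracts the fields from a sudo journal line of the shape
// "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1".
package main

import (
	"fmt"
	"regexp"
)

var sudoRe = regexp.MustCompile(`^\s*(\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)$`)

func main() {
	line := "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
	m := sudoRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("user=%s pwd=%s runas=%s cmd=%q\n", m[1], m[2], m[3], m[4])
}
```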
Jan 13 20:24:00.912333 sshd[1829]: Connection closed by 147.75.109.163 port 36664
Jan 13 20:24:00.913503 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:00.919048 systemd[1]: sshd@4-138.199.153.83:22-147.75.109.163:36664.service: Deactivated successfully.
Jan 13 20:24:00.921746 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:24:00.922778 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:24:00.924188 systemd-logind[1461]: Removed session 5.
Jan 13 20:24:01.085755 systemd[1]: Started sshd@5-138.199.153.83:22-147.75.109.163:36672.service - OpenSSH per-connection server daemon (147.75.109.163:36672).
Jan 13 20:24:02.064593 sshd[1835]: Accepted publickey for core from 147.75.109.163 port 36672 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:02.067063 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:02.075163 systemd-logind[1461]: New session 6 of user core.
Jan 13 20:24:02.080852 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:24:02.583326 sudo[1839]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:24:02.583685 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:24:02.588659 sudo[1839]: pam_unix(sudo:session): session closed for user root
Jan 13 20:24:02.594986 sudo[1838]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:24:02.595326 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:24:02.613120 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:24:02.646195 augenrules[1861]: No rules
Jan 13 20:24:02.648040 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:24:02.648249 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:24:02.652546 sudo[1838]: pam_unix(sudo:session): session closed for user root
Jan 13 20:24:02.810716 sshd[1837]: Connection closed by 147.75.109.163 port 36672
Jan 13 20:24:02.811594 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:02.818593 systemd[1]: sshd@5-138.199.153.83:22-147.75.109.163:36672.service: Deactivated successfully.
Jan 13 20:24:02.820310 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:24:02.822009 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:24:02.823421 systemd-logind[1461]: Removed session 6.
Jan 13 20:24:02.988163 systemd[1]: Started sshd@6-138.199.153.83:22-147.75.109.163:36684.service - OpenSSH per-connection server daemon (147.75.109.163:36684).
Jan 13 20:24:04.003549 sshd[1869]: Accepted publickey for core from 147.75.109.163 port 36684 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:04.005400 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:04.010567 systemd-logind[1461]: New session 7 of user core.
Jan 13 20:24:04.025574 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:24:04.532633 sudo[1872]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:24:04.532950 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:24:04.854857 (dockerd)[1890]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:24:04.855501 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:24:05.104551 dockerd[1890]: time="2025-01-13T20:24:05.103646840Z" level=info msg="Starting up"
Jan 13 20:24:05.207463 dockerd[1890]: time="2025-01-13T20:24:05.207316179Z" level=info msg="Loading containers: start."
Jan 13 20:24:05.381295 kernel: Initializing XFRM netlink socket
Jan 13 20:24:05.483406 systemd-networkd[1368]: docker0: Link UP
Jan 13 20:24:05.521061 dockerd[1890]: time="2025-01-13T20:24:05.520974430Z" level=info msg="Loading containers: done."
Jan 13 20:24:05.535473 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3479263260-merged.mount: Deactivated successfully.
Jan 13 20:24:05.540104 dockerd[1890]: time="2025-01-13T20:24:05.539559350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:24:05.540104 dockerd[1890]: time="2025-01-13T20:24:05.539727072Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 13 20:24:05.540104 dockerd[1890]: time="2025-01-13T20:24:05.539873274Z" level=info msg="Daemon has completed initialization"
Jan 13 20:24:05.583636 dockerd[1890]: time="2025-01-13T20:24:05.583091672Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:24:05.583428 systemd[1]: Started docker.service - Docker Application Container Engine.
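
Once dockerd reports "API listen on /run/docker.sock", the daemon answers the Docker Engine API over that unix socket; /_ping is its standard liveness endpoint. A stdlib-only sketch, assuming read access to /run/docker.sock (typically root):

```go
// dockerping checks daemon liveness over the unix socket the log
// reports, using the Engine API's /_ping endpoint.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/_ping") // host part is ignored over a unix socket
	if err != nil {
		fmt.Println("daemon not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%q\n", resp.Status, body) // expect 200 and "OK"
}
```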
Jan 13 20:24:06.724810 containerd[1481]: time="2025-01-13T20:24:06.724751135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 13 20:24:07.416308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764472839.mount: Deactivated successfully.
Jan 13 20:24:08.214639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 13 20:24:08.225940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:08.349464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:08.356145 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:24:08.411609 kubelet[2146]: E0113 20:24:08.411347    2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:24:08.415582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:24:08.415780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:24:08.524110 containerd[1481]: time="2025-01-13T20:24:08.523967228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:08.529273 containerd[1481]: time="2025-01-13T20:24:08.529196138Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:08.530972 containerd[1481]: time="2025-01-13T20:24:08.530918401Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201342"
Jan 13 20:24:08.535614 containerd[1481]: time="2025-01-13T20:24:08.535554143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:08.538303 containerd[1481]: time="2025-01-13T20:24:08.538254819Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.813435324s"
Jan 13 20:24:08.538453 containerd[1481]: time="2025-01-13T20:24:08.538435942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
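
Each "Pulled image" line reports both a size and the wall time, so effective pull throughput falls out directly: 32,198,050 bytes in 1.813 s is roughly 17.8 MB/s for the kube-apiserver image. The arithmetic as a snippet:

```go
// pullrate computes effective throughput from the size and duration
// containerd logs for a pull ("size \"32198050\" in 1.813435324s").
package main

import (
	"fmt"
	"time"
)

func main() {
	size := 32198050.0 // bytes, from the kube-apiserver pull above
	d, _ := time.ParseDuration("1.813435324s")
	fmt.Printf("%.1f MB/s\n", size/d.Seconds()/1e6) // ≈ 17.8 MB/s
}
```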
Jan 13 20:24:08.565270 containerd[1481]: time="2025-01-13T20:24:08.565221260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 20:24:09.891428 containerd[1481]: time="2025-01-13T20:24:09.890285376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:09.892853 containerd[1481]: time="2025-01-13T20:24:09.892801290Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381317"
Jan 13 20:24:09.894370 containerd[1481]: time="2025-01-13T20:24:09.894295030Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:09.899076 containerd[1481]: time="2025-01-13T20:24:09.899018174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:09.901378 containerd[1481]: time="2025-01-13T20:24:09.901324085Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.335887222s"
Jan 13 20:24:09.901546 containerd[1481]: time="2025-01-13T20:24:09.901528848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Jan 13 20:24:09.926700 containerd[1481]: time="2025-01-13T20:24:09.926606708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 20:24:10.812828 containerd[1481]: time="2025-01-13T20:24:10.811607405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:10.814166 containerd[1481]: time="2025-01-13T20:24:10.814104679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765660"
Jan 13 20:24:10.815515 containerd[1481]: time="2025-01-13T20:24:10.815447058Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:10.818819 containerd[1481]: time="2025-01-13T20:24:10.818764943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:10.820514 containerd[1481]: time="2025-01-13T20:24:10.820472326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 893.821538ms"
Jan 13 20:24:10.820668 containerd[1481]: time="2025-01-13T20:24:10.820652209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Jan 13 20:24:10.846390 containerd[1481]: time="2025-01-13T20:24:10.846349080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 20:24:11.839195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272624946.mount: Deactivated successfully.
Jan 13 20:24:12.183923 containerd[1481]: time="2025-01-13T20:24:12.183709689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:12.185581 containerd[1481]: time="2025-01-13T20:24:12.185525074Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003"
Jan 13 20:24:12.186254 containerd[1481]: time="2025-01-13T20:24:12.185927440Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:12.189327 containerd[1481]: time="2025-01-13T20:24:12.189266567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:12.190343 containerd[1481]: time="2025-01-13T20:24:12.190126179Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.343508734s"
Jan 13 20:24:12.190343 containerd[1481]: time="2025-01-13T20:24:12.190165379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Jan 13 20:24:12.216970 containerd[1481]: time="2025-01-13T20:24:12.216916792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:24:12.794051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194122982.mount: Deactivated successfully.
Jan 13 20:24:13.452890 containerd[1481]: time="2025-01-13T20:24:13.452824662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:13.455104 containerd[1481]: time="2025-01-13T20:24:13.455010932Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 13 20:24:13.456620 containerd[1481]: time="2025-01-13T20:24:13.456557274Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:13.461587 containerd[1481]: time="2025-01-13T20:24:13.460125605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:13.465392 containerd[1481]: time="2025-01-13T20:24:13.461343862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.244371349s"
Jan 13 20:24:13.465392 containerd[1481]: time="2025-01-13T20:24:13.462726281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:24:13.493822 containerd[1481]: time="2025-01-13T20:24:13.493775759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:24:14.063017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424368558.mount: Deactivated successfully.
Jan 13 20:24:14.072146 containerd[1481]: time="2025-01-13T20:24:14.071913792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:14.073288 containerd[1481]: time="2025-01-13T20:24:14.073216450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Jan 13 20:24:14.075652 containerd[1481]: time="2025-01-13T20:24:14.074466428Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:14.077355 containerd[1481]: time="2025-01-13T20:24:14.077307509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:14.078489 containerd[1481]: time="2025-01-13T20:24:14.078449045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 584.369922ms"
Jan 13 20:24:14.078627 containerd[1481]: time="2025-01-13T20:24:14.078610847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 13 20:24:14.106081 containerd[1481]: time="2025-01-13T20:24:14.106010717Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 20:24:14.695391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808782170.mount: Deactivated successfully.
Jan 13 20:24:16.078257 containerd[1481]: time="2025-01-13T20:24:16.076391786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:16.079970 containerd[1481]: time="2025-01-13T20:24:16.079914557Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866"
Jan 13 20:24:16.081382 containerd[1481]: time="2025-01-13T20:24:16.081346058Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:16.085980 containerd[1481]: time="2025-01-13T20:24:16.085935044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:16.087821 containerd[1481]: time="2025-01-13T20:24:16.087752911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.981692834s"
Jan 13 20:24:16.087821 containerd[1481]: time="2025-01-13T20:24:16.087814672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jan 13 20:24:18.464925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 13 20:24:18.473511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:18.597443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:18.610618 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:24:18.673271 kubelet[2359]: E0113 20:24:18.673038    2359 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:24:18.677166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:24:18.677529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:24:22.727069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:22.733989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:22.767996 systemd[1]: Reloading requested from client PID 2375 ('systemctl') (unit session-7.scope)...
Jan 13 20:24:22.768014 systemd[1]: Reloading...
Jan 13 20:24:22.904274 zram_generator::config[2415]: No configuration found.
Jan 13 20:24:23.009703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:24:23.077681 systemd[1]: Reloading finished in 308 ms.
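
The warning during this reload means docker.socket still declares ListenStream=/var/run/docker.sock; systemd normalizes it at load time because /var/run is a compatibility symlink to /run, and the durable fix is pointing ListenStream= at /run/docker.sock in the unit itself. A sketch that shows the symlink and the normalized path on a live system:

```go
// legacyrun shows why systemd rewrites /var/run/docker.sock to
// /run/docker.sock: on modern systems /var/run is a symlink to /run.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	target, err := os.Readlink("/var/run")
	fmt.Printf("/var/run -> %q (err=%v)\n", target, err) // typically "../run" or "/run"

	norm, err := filepath.EvalSymlinks("/var/run/docker.sock")
	if err != nil {
		fmt.Println("socket not present:", err)
		return
	}
	fmt.Println("normalized path:", norm) // /run/docker.sock
}
```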
Jan 13 20:24:23.141249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:23.146137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:23.149650 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:24:23.150509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:23.160602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:23.287217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:23.298730 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:24:23.348446 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:24:23.348446 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:24:23.348446 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:24:23.349094 kubelet[2465]: I0113 20:24:23.348475    2465 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
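
The deprecation warnings say --container-runtime-endpoint and --volume-plugin-dir belong in the KubeletConfiguration file passed via --config — the same /var/lib/kubelet/config.yaml whose absence drove the earlier restart loop. A hedged sketch that writes a minimal such file; the field values here are illustrative assumptions, not the config this node actually received from its bootstrap:

```go
// writecfg drops a minimal KubeletConfiguration at the path the kubelet
// expects. Field values are assumptions for illustration only.
package main

import (
	"log"
	"os"
)

const cfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(cfg), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}
```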
Jan 13 20:24:24.593040 kubelet[2465]: I0113 20:24:24.592965    2465 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:24:24.593040 kubelet[2465]: I0113 20:24:24.593009    2465 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:24:24.593706 kubelet[2465]: I0113 20:24:24.593360    2465 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:24:24.617562 kubelet[2465]: E0113 20:24:24.617496    2465 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.153.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.617940 kubelet[2465]: I0113 20:24:24.617767    2465 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:24:24.631664 kubelet[2465]: I0113 20:24:24.631621    2465 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:24:24.631895 kubelet[2465]: I0113 20:24:24.631877    2465 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:24:24.632120 kubelet[2465]: I0113 20:24:24.632086    2465 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:24:24.632120 kubelet[2465]: I0113 20:24:24.632115    2465 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:24:24.632286 kubelet[2465]: I0113 20:24:24.632125    2465 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:24:24.633728 kubelet[2465]: I0113 20:24:24.633668    2465 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:24:24.637222 kubelet[2465]: I0113 20:24:24.637158    2465 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:24:24.637964 kubelet[2465]: I0113 20:24:24.637712    2465 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:24:24.637964 kubelet[2465]: I0113 20:24:24.637749    2465 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:24:24.637964 kubelet[2465]: I0113 20:24:24.637767    2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:24:24.637964 kubelet[2465]: W0113 20:24:24.637854    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.153.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-49e4a12287&limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.637964 kubelet[2465]: E0113 20:24:24.637941    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-49e4a12287&limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.640212 kubelet[2465]: W0113 20:24:24.640148    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.153.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.640462 kubelet[2465]: E0113 20:24:24.640291    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.641424 kubelet[2465]: I0113 20:24:24.641004    2465 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:24:24.641782 kubelet[2465]: I0113 20:24:24.641762    2465 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:24:24.642790 kubelet[2465]: W0113 20:24:24.642762    2465 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:24:24.643911 kubelet[2465]: I0113 20:24:24.643879    2465 server.go:1256] "Started kubelet"
Jan 13 20:24:24.647545 kubelet[2465]: I0113 20:24:24.647496    2465 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:24:24.648922 kubelet[2465]: I0113 20:24:24.648214    2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:24:24.648922 kubelet[2465]: I0113 20:24:24.648439    2465 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:24:24.648922 kubelet[2465]: I0113 20:24:24.648577    2465 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:24:24.651572 kubelet[2465]: E0113 20:24:24.651532    2465 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.83:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-6-49e4a12287.181a5a46620ad224  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-49e4a12287,UID:ci-4152-2-0-6-49e4a12287,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-49e4a12287,},FirstTimestamp:2025-01-13 20:24:24.643834404 +0000 UTC m=+1.340823029,LastTimestamp:2025-01-13 20:24:24.643834404 +0000 UTC m=+1.340823029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-49e4a12287,}"
Jan 13 20:24:24.652210 kubelet[2465]: I0113 20:24:24.652169    2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:24:24.657442 kubelet[2465]: I0113 20:24:24.657204    2465 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:24:24.657728 kubelet[2465]: I0113 20:24:24.657706    2465 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:24:24.657891 kubelet[2465]: I0113 20:24:24.657877    2465 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:24:24.658453 kubelet[2465]: W0113 20:24:24.658408    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.153.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.658600 kubelet[2465]: E0113 20:24:24.658586    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.659504 kubelet[2465]: E0113 20:24:24.659454    2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-49e4a12287?timeout=10s\": dial tcp 138.199.153.83:6443: connect: connection refused" interval="200ms"
Jan 13 20:24:24.660115 kubelet[2465]: E0113 20:24:24.659743    2465 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:24:24.661832 kubelet[2465]: I0113 20:24:24.661784    2465 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:24:24.663269 kubelet[2465]: I0113 20:24:24.663099    2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:24:24.667015 kubelet[2465]: I0113 20:24:24.665531    2465 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:24:24.683859 kubelet[2465]: I0113 20:24:24.683804    2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:24:24.688804 kubelet[2465]: I0113 20:24:24.688757    2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:24:24.688804 kubelet[2465]: I0113 20:24:24.688797    2465 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:24:24.689000 kubelet[2465]: I0113 20:24:24.688825    2465 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:24:24.689000 kubelet[2465]: E0113 20:24:24.688903    2465 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:24:24.693338 kubelet[2465]: W0113 20:24:24.693268    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.153.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.693779 kubelet[2465]: E0113 20:24:24.693749    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:24.698466 kubelet[2465]: I0113 20:24:24.698431    2465 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:24:24.698639 kubelet[2465]: I0113 20:24:24.698619    2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:24:24.698914 kubelet[2465]: I0113 20:24:24.698852    2465 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:24:24.702276 kubelet[2465]: I0113 20:24:24.702009    2465 policy_none.go:49] "None policy: Start"
Jan 13 20:24:24.703746 kubelet[2465]: I0113 20:24:24.703528    2465 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:24:24.703746 kubelet[2465]: I0113 20:24:24.703671    2465 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:24:24.710071 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:24:24.725388 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:24:24.743753 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:24:24.746744 kubelet[2465]: I0113 20:24:24.745737    2465 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:24:24.746744 kubelet[2465]: I0113 20:24:24.746070    2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:24:24.748451 kubelet[2465]: E0113 20:24:24.748408    2465 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-6-49e4a12287\" not found"
Jan 13 20:24:24.761308 kubelet[2465]: I0113 20:24:24.761215    2465 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.761768 kubelet[2465]: E0113 20:24:24.761740    2465 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.83:6443/api/v1/nodes\": dial tcp 138.199.153.83:6443: connect: connection refused" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.789687 kubelet[2465]: I0113 20:24:24.789162    2465 topology_manager.go:215] "Topology Admit Handler" podUID="5aa8018309e7466c38f1ce9a58bfdfe4" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.791719 kubelet[2465]: I0113 20:24:24.791643    2465 topology_manager.go:215] "Topology Admit Handler" podUID="9b6c068311a689998d4ae17f559aa605" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.793957 kubelet[2465]: I0113 20:24:24.793926    2465 topology_manager.go:215] "Topology Admit Handler" podUID="9b4a0500d9018eb4d7cc490291b335ad" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.802086 systemd[1]: Created slice kubepods-burstable-pod5aa8018309e7466c38f1ce9a58bfdfe4.slice - libcontainer container kubepods-burstable-pod5aa8018309e7466c38f1ce9a58bfdfe4.slice.
Jan 13 20:24:24.820595 systemd[1]: Created slice kubepods-burstable-pod9b6c068311a689998d4ae17f559aa605.slice - libcontainer container kubepods-burstable-pod9b6c068311a689998d4ae17f559aa605.slice.
Jan 13 20:24:24.823542 kubelet[2465]: E0113 20:24:24.823499    2465 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.83:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-6-49e4a12287.181a5a46620ad224  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-49e4a12287,UID:ci-4152-2-0-6-49e4a12287,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-49e4a12287,},FirstTimestamp:2025-01-13 20:24:24.643834404 +0000 UTC m=+1.340823029,LastTimestamp:2025-01-13 20:24:24.643834404 +0000 UTC m=+1.340823029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-49e4a12287,}"
Jan 13 20:24:24.832557 systemd[1]: Created slice kubepods-burstable-pod9b4a0500d9018eb4d7cc490291b335ad.slice - libcontainer container kubepods-burstable-pod9b4a0500d9018eb4d7cc490291b335ad.slice.
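
The three Topology Admit Handler entries are the control-plane static pods read from the staticPodPath registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"), and each one gets its own kubepods-burstable-pod<uid>.slice as shown above. A sketch of the manifest lookup and the slice-name convention (the dash-stripping mirrors how kubelet escapes pod UIDs for systemd; static-pod UIDs like these are already dashless hashes):

```go
// staticpods lists the manifests kubelet watches and reproduces the
// slice-name pattern from the log. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func sliceName(uid string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(uid, "-", "") + ".slice"
}

func main() {
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err == nil {
		for _, e := range entries {
			fmt.Println("manifest:", e.Name())
		}
	}
	// UID taken from the kube-apiserver admit line above.
	fmt.Println(sliceName("5aa8018309e7466c38f1ce9a58bfdfe4"))
}
```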
Jan 13 20:24:24.861022 kubelet[2465]: E0113 20:24:24.860765    2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-49e4a12287?timeout=10s\": dial tcp 138.199.153.83:6443: connect: connection refused" interval="400ms"
Jan 13 20:24:24.959833 kubelet[2465]: I0113 20:24:24.959427    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.959833 kubelet[2465]: I0113 20:24:24.959513    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b4a0500d9018eb4d7cc490291b335ad-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-49e4a12287\" (UID: \"9b4a0500d9018eb4d7cc490291b335ad\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.959833 kubelet[2465]: I0113 20:24:24.959548    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.959833 kubelet[2465]: I0113 20:24:24.959576    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.959833 kubelet[2465]: I0113 20:24:24.959607    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.960188 kubelet[2465]: I0113 20:24:24.959635    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.960188 kubelet[2465]: I0113 20:24:24.959664    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.960188 kubelet[2465]: I0113 20:24:24.959693    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.960188 kubelet[2465]: I0113 20:24:24.959727    2465 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.965344 kubelet[2465]: I0113 20:24:24.965248    2465 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:24.965985 kubelet[2465]: E0113 20:24:24.965883    2465 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.83:6443/api/v1/nodes\": dial tcp 138.199.153.83:6443: connect: connection refused" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:25.117942 containerd[1481]: time="2025-01-13T20:24:25.117699656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-49e4a12287,Uid:5aa8018309e7466c38f1ce9a58bfdfe4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:25.129652 containerd[1481]: time="2025-01-13T20:24:25.129586278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-49e4a12287,Uid:9b6c068311a689998d4ae17f559aa605,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:25.136935 containerd[1481]: time="2025-01-13T20:24:25.136865910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-49e4a12287,Uid:9b4a0500d9018eb4d7cc490291b335ad,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:25.262394 kubelet[2465]: E0113 20:24:25.262338    2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-49e4a12287?timeout=10s\": dial tcp 138.199.153.83:6443: connect: connection refused" interval="800ms"
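
The "Failed to ensure lease exists, will retry" interval doubles across attempts (200ms at 20:24:24.659, 400ms at 24.860, 800ms here): the kubelet cannot reach the API server because that server is the very static pod it is still in the middle of starting. A generic sketch of such a doubling schedule; the cap value is an assumption for illustration, not one taken from the kubelet:

```go
// backoff reproduces the doubling retry schedule visible in the lease
// errors above (200ms -> 400ms -> 800ms), capped at an assumed maximum.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap for illustration
	for i := 0; i < 7; i++ {
		fmt.Printf("attempt %d: next retry in %v\n", i+1, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```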
Jan 13 20:24:25.369510 kubelet[2465]: I0113 20:24:25.369003    2465 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:25.370036 kubelet[2465]: E0113 20:24:25.369962    2465 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.83:6443/api/v1/nodes\": dial tcp 138.199.153.83:6443: connect: connection refused" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:25.529568 kubelet[2465]: W0113 20:24:25.529472    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.153.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-49e4a12287&limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:25.529568 kubelet[2465]: E0113 20:24:25.529542    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-49e4a12287&limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:25.650304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810081069.mount: Deactivated successfully.
Jan 13 20:24:25.659438 containerd[1481]: time="2025-01-13T20:24:25.659391063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:24:25.664560 containerd[1481]: time="2025-01-13T20:24:25.664186497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 13 20:24:25.666941 containerd[1481]: time="2025-01-13T20:24:25.665899403Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:24:25.668388 containerd[1481]: time="2025-01-13T20:24:25.668074157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:24:25.670386 containerd[1481]: time="2025-01-13T20:24:25.669897105Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:24:25.674349 containerd[1481]: time="2025-01-13T20:24:25.673556841Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:24:25.674913 containerd[1481]: time="2025-01-13T20:24:25.674823540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:24:25.676166 containerd[1481]: time="2025-01-13T20:24:25.676110720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:24:25.678225 containerd[1481]: time="2025-01-13T20:24:25.678169192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.16432ms"
Jan 13 20:24:25.680221 containerd[1481]: time="2025-01-13T20:24:25.680170303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.361125ms"
Jan 13 20:24:25.681327 containerd[1481]: time="2025-01-13T20:24:25.681288400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.59364ms"
Jan 13 20:24:25.825362 containerd[1481]: time="2025-01-13T20:24:25.824420800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:25.825362 containerd[1481]: time="2025-01-13T20:24:25.824717645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:25.825362 containerd[1481]: time="2025-01-13T20:24:25.824731925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.825362 containerd[1481]: time="2025-01-13T20:24:25.825353614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.828574 containerd[1481]: time="2025-01-13T20:24:25.828376661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:25.828933 containerd[1481]: time="2025-01-13T20:24:25.828480903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:25.828933 containerd[1481]: time="2025-01-13T20:24:25.828614185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.828933 containerd[1481]: time="2025-01-13T20:24:25.828739026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.834048 containerd[1481]: time="2025-01-13T20:24:25.833767704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:25.834048 containerd[1481]: time="2025-01-13T20:24:25.833840305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:25.834048 containerd[1481]: time="2025-01-13T20:24:25.833867385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.834394 containerd[1481]: time="2025-01-13T20:24:25.834309832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:25.856010 systemd[1]: Started cri-containerd-2f28c890605e8ad01bf50ffdde300f1a8cc0d0672d3f5fdcaee6d4bb99601782.scope - libcontainer container 2f28c890605e8ad01bf50ffdde300f1a8cc0d0672d3f5fdcaee6d4bb99601782.
Jan 13 20:24:25.856991 kubelet[2465]: W0113 20:24:25.856938    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.153.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:25.857802 kubelet[2465]: E0113 20:24:25.857251    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:25.868504 systemd[1]: Started cri-containerd-1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325.scope - libcontainer container 1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325.
Jan 13 20:24:25.884822 systemd[1]: Started cri-containerd-7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b.scope - libcontainer container 7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b.
Jan 13 20:24:25.943353 containerd[1481]: time="2025-01-13T20:24:25.942117289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-49e4a12287,Uid:5aa8018309e7466c38f1ce9a58bfdfe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f28c890605e8ad01bf50ffdde300f1a8cc0d0672d3f5fdcaee6d4bb99601782\""
Jan 13 20:24:25.953932 containerd[1481]: time="2025-01-13T20:24:25.953145139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-49e4a12287,Uid:9b6c068311a689998d4ae17f559aa605,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325\""
Jan 13 20:24:25.955922 containerd[1481]: time="2025-01-13T20:24:25.955813620Z" level=info msg="CreateContainer within sandbox \"2f28c890605e8ad01bf50ffdde300f1a8cc0d0672d3f5fdcaee6d4bb99601782\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:24:25.960740 containerd[1481]: time="2025-01-13T20:24:25.960698575Z" level=info msg="CreateContainer within sandbox \"1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:24:25.974000 containerd[1481]: time="2025-01-13T20:24:25.973865218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-49e4a12287,Uid:9b4a0500d9018eb4d7cc490291b335ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b\""
Jan 13 20:24:25.981789 containerd[1481]: time="2025-01-13T20:24:25.981510415Z" level=info msg="CreateContainer within sandbox \"7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:24:25.985311 containerd[1481]: time="2025-01-13T20:24:25.985072750Z" level=info msg="CreateContainer within sandbox \"2f28c890605e8ad01bf50ffdde300f1a8cc0d0672d3f5fdcaee6d4bb99601782\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82534e58db0eb7807af11ec2d79d08bf713b53ce6fa64d0c501b4352320ba532\""
Jan 13 20:24:25.987174 containerd[1481]: time="2025-01-13T20:24:25.986997899Z" level=info msg="StartContainer for \"82534e58db0eb7807af11ec2d79d08bf713b53ce6fa64d0c501b4352320ba532\""
Jan 13 20:24:25.991952 containerd[1481]: time="2025-01-13T20:24:25.991500329Z" level=info msg="CreateContainer within sandbox \"1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c\""
Jan 13 20:24:25.993305 containerd[1481]: time="2025-01-13T20:24:25.992264500Z" level=info msg="StartContainer for \"43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c\""
Jan 13 20:24:26.012376 containerd[1481]: time="2025-01-13T20:24:26.012328730Z" level=info msg="CreateContainer within sandbox \"7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b\""
Jan 13 20:24:26.013948 containerd[1481]: time="2025-01-13T20:24:26.013909234Z" level=info msg="StartContainer for \"d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b\""
Jan 13 20:24:26.026453 systemd[1]: Started cri-containerd-82534e58db0eb7807af11ec2d79d08bf713b53ce6fa64d0c501b4352320ba532.scope - libcontainer container 82534e58db0eb7807af11ec2d79d08bf713b53ce6fa64d0c501b4352320ba532.
Jan 13 20:24:26.029625 kubelet[2465]: W0113 20:24:26.029589    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.153.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:26.029805 kubelet[2465]: E0113 20:24:26.029792    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:26.044369 systemd[1]: Started cri-containerd-43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c.scope - libcontainer container 43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c.
Jan 13 20:24:26.066271 kubelet[2465]: E0113 20:24:26.064283    2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-49e4a12287?timeout=10s\": dial tcp 138.199.153.83:6443: connect: connection refused" interval="1.6s"
Jan 13 20:24:26.071481 systemd[1]: Started cri-containerd-d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b.scope - libcontainer container d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b.
Jan 13 20:24:26.101557 containerd[1481]: time="2025-01-13T20:24:26.101497548Z" level=info msg="StartContainer for \"82534e58db0eb7807af11ec2d79d08bf713b53ce6fa64d0c501b4352320ba532\" returns successfully"
Jan 13 20:24:26.124865 containerd[1481]: time="2025-01-13T20:24:26.124577905Z" level=info msg="StartContainer for \"43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c\" returns successfully"
Jan 13 20:24:26.137503 kubelet[2465]: W0113 20:24:26.137349    2465 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.153.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:26.137503 kubelet[2465]: E0113 20:24:26.137430    2465 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.83:6443: connect: connection refused
Jan 13 20:24:26.159684 containerd[1481]: time="2025-01-13T20:24:26.159633207Z" level=info msg="StartContainer for \"d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b\" returns successfully"
Jan 13 20:24:26.172916 kubelet[2465]: I0113 20:24:26.172522    2465 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:26.172916 kubelet[2465]: E0113 20:24:26.172887    2465 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.83:6443/api/v1/nodes\": dial tcp 138.199.153.83:6443: connect: connection refused" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:27.775493 kubelet[2465]: I0113 20:24:27.774813    2465 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:29.771262 kubelet[2465]: I0113 20:24:29.770021    2465 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:30.197520 kubelet[2465]: E0113 20:24:30.196657    2465 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:30.379853 kubelet[2465]: E0113 20:24:30.379345    2465 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
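The two "Failed creating a mirror pod" errors above are transient: the built-in system-node-critical PriorityClass is created by the kube-apiserver itself shortly after it starts serving, and the kubelet retries; the mirror pods succeed later in this log once the class exists. For orientation only, a sketch of what an equivalent user-defined PriorityClass looks like (the name below is hypothetical, and the built-in classes themselves should never be recreated by hand):

    # Sketch only; "example-node-critical" is a hypothetical name.
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: example-node-critical
    value: 2000001000        # same value the built-in system-node-critical carries
    globalDefault: false
    description: "Illustrative node-critical priority class."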
Jan 13 20:24:30.643451 kubelet[2465]: I0113 20:24:30.643396    2465 apiserver.go:52] "Watching apiserver"
Jan 13 20:24:30.658089 kubelet[2465]: I0113 20:24:30.657976    2465 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:24:32.473889 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-7.scope)...
Jan 13 20:24:32.473928 systemd[1]: Reloading...
Jan 13 20:24:32.599370 zram_generator::config[2777]: No configuration found.
Jan 13 20:24:32.703950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
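systemd rewrites the legacy path in memory at load time, so the socket keeps working, but the warning recurs on every reload until the unit itself is updated. A minimal sketch of a drop-in that does so, assuming the hypothetical drop-in filename shown (list-type settings like ListenStream= must first be cleared with an empty assignment before the new value is set):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf  (hypothetical drop-in name)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock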
Jan 13 20:24:32.785601 systemd[1]: Reloading finished in 310 ms.
Jan 13 20:24:32.848967 kubelet[2465]: I0113 20:24:32.848856    2465 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:24:32.851400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:32.864769 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:24:32.865042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:32.865107 systemd[1]: kubelet.service: Consumed 1.801s CPU time, 112.6M memory peak, 0B memory swap peak.
Jan 13 20:24:32.869867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:24:33.013295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:24:33.029738 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
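This notice is cosmetic: referencing an unset environment variable in a unit file simply expands to an empty string. Defining the variable, even empty, silences it; a sketch using a hypothetical drop-in path, since the log does not show which environment file the Flatcar kubelet unit reads:

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf  (hypothetical name)
    [Service]
    Environment="KUBELET_EXTRA_ARGS="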
Jan 13 20:24:33.108997 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:24:33.108997 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:24:33.108997 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
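Per the kubelet-config-file page linked in the warnings above, the deprecated flags map onto KubeletConfiguration fields. A partial sketch of the equivalent stanza: staticPodPath matches the manifest path this kubelet logs a few lines down, containerRuntimeEndpoint has been a config-file field since v1.27 (this kubelet reports v1.29.2), and the socket value shown is the common containerd default rather than something this log states:

    # Sketch of a KubeletConfiguration fragment; the endpoint value is an assumption.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock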
Jan 13 20:24:33.110252 kubelet[2821]: I0113 20:24:33.109536    2821 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:24:33.114769 kubelet[2821]: I0113 20:24:33.114734    2821 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:24:33.114962 kubelet[2821]: I0113 20:24:33.114951    2821 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:24:33.115408 kubelet[2821]: I0113 20:24:33.115388    2821 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:24:33.118528 kubelet[2821]: I0113 20:24:33.118495    2821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:24:33.128184 kubelet[2821]: I0113 20:24:33.128139    2821 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:24:33.138123 kubelet[2821]: I0113 20:24:33.138088    2821 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:24:33.138710 kubelet[2821]: I0113 20:24:33.138507    2821 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:24:33.138888 kubelet[2821]: I0113 20:24:33.138870    2821 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:24:33.139079 kubelet[2821]: I0113 20:24:33.139064    2821 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:24:33.139160 kubelet[2821]: I0113 20:24:33.139148    2821 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:24:33.139580 kubelet[2821]: I0113 20:24:33.139259    2821 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:24:33.139580 kubelet[2821]: I0113 20:24:33.139384    2821 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:24:33.139580 kubelet[2821]: I0113 20:24:33.139403    2821 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:24:33.139580 kubelet[2821]: I0113 20:24:33.139427    2821 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:24:33.139580 kubelet[2821]: I0113 20:24:33.139447    2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:24:33.145295 kubelet[2821]: I0113 20:24:33.142829    2821 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:24:33.145295 kubelet[2821]: I0113 20:24:33.143053    2821 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:24:33.145295 kubelet[2821]: I0113 20:24:33.143499    2821 server.go:1256] "Started kubelet"
Jan 13 20:24:33.147047 kubelet[2821]: I0113 20:24:33.146986    2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:24:33.161246 kubelet[2821]: I0113 20:24:33.160368    2821 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:24:33.163629 kubelet[2821]: I0113 20:24:33.161229    2821 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:24:33.166218 kubelet[2821]: I0113 20:24:33.166161    2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:24:33.166447 sudo[2834]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 13 20:24:33.166791 sudo[2834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 13 20:24:33.169218 kubelet[2821]: I0113 20:24:33.169155    2821 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:24:33.180011 kubelet[2821]: I0113 20:24:33.178560    2821 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:24:33.180462 kubelet[2821]: I0113 20:24:33.180424    2821 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:24:33.182514 kubelet[2821]: I0113 20:24:33.181006    2821 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:24:33.195721 kubelet[2821]: I0113 20:24:33.195355    2821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:24:33.199584 kubelet[2821]: I0113 20:24:33.199549    2821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:24:33.201136 kubelet[2821]: I0113 20:24:33.200063    2821 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:24:33.201136 kubelet[2821]: I0113 20:24:33.200099    2821 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:24:33.201136 kubelet[2821]: E0113 20:24:33.200156    2821 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:24:33.210787 kubelet[2821]: E0113 20:24:33.210192    2821 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:24:33.218814 kubelet[2821]: I0113 20:24:33.216763    2821 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:24:33.219292 kubelet[2821]: I0113 20:24:33.219146    2821 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:24:33.233747 kubelet[2821]: I0113 20:24:33.233363    2821 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:24:33.289148 kubelet[2821]: I0113 20:24:33.289107    2821 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.301069 kubelet[2821]: E0113 20:24:33.300722    2821 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 20:24:33.307987 kubelet[2821]: I0113 20:24:33.307393    2821 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.307987 kubelet[2821]: I0113 20:24:33.307474    2821 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.315844 kubelet[2821]: I0113 20:24:33.314681    2821 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:24:33.315844 kubelet[2821]: I0113 20:24:33.315406    2821 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:24:33.315844 kubelet[2821]: I0113 20:24:33.315561    2821 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:24:33.317727 kubelet[2821]: I0113 20:24:33.317320    2821 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:24:33.317727 kubelet[2821]: I0113 20:24:33.317366    2821 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:24:33.317727 kubelet[2821]: I0113 20:24:33.317375    2821 policy_none.go:49] "None policy: Start"
Jan 13 20:24:33.321820 kubelet[2821]: I0113 20:24:33.321610    2821 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:24:33.321820 kubelet[2821]: I0113 20:24:33.321744    2821 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:24:33.323035 kubelet[2821]: I0113 20:24:33.323001    2821 state_mem.go:75] "Updated machine memory state"
Jan 13 20:24:33.339206 kubelet[2821]: I0113 20:24:33.339167    2821 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:24:33.342178 kubelet[2821]: I0113 20:24:33.341519    2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:24:33.502541 kubelet[2821]: I0113 20:24:33.501382    2821 topology_manager.go:215] "Topology Admit Handler" podUID="5aa8018309e7466c38f1ce9a58bfdfe4" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.502541 kubelet[2821]: I0113 20:24:33.501476    2821 topology_manager.go:215] "Topology Admit Handler" podUID="9b6c068311a689998d4ae17f559aa605" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.502541 kubelet[2821]: I0113 20:24:33.501532    2821 topology_manager.go:215] "Topology Admit Handler" podUID="9b4a0500d9018eb4d7cc490291b335ad" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.582663 kubelet[2821]: I0113 20:24:33.582625    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b4a0500d9018eb4d7cc490291b335ad-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-49e4a12287\" (UID: \"9b4a0500d9018eb4d7cc490291b335ad\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.582663 kubelet[2821]: I0113 20:24:33.582671    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.582814 kubelet[2821]: I0113 20:24:33.582706    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.582814 kubelet[2821]: I0113 20:24:33.582726    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.582814 kubelet[2821]: I0113 20:24:33.582745    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.583106 kubelet[2821]: I0113 20:24:33.583081    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.583147 kubelet[2821]: I0113 20:24:33.583130    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.583172 kubelet[2821]: I0113 20:24:33.583155    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aa8018309e7466c38f1ce9a58bfdfe4-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" (UID: \"5aa8018309e7466c38f1ce9a58bfdfe4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.583194 kubelet[2821]: I0113 20:24:33.583177    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b6c068311a689998d4ae17f559aa605-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-49e4a12287\" (UID: \"9b6c068311a689998d4ae17f559aa605\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:33.703881 sudo[2834]: pam_unix(sudo:session): session closed for user root
Jan 13 20:24:34.152260 kubelet[2821]: I0113 20:24:34.151113    2821 apiserver.go:52] "Watching apiserver"
Jan 13 20:24:34.180722 kubelet[2821]: I0113 20:24:34.180663    2821 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:24:34.284247 kubelet[2821]: E0113 20:24:34.281966    2821 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-6-49e4a12287\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287"
Jan 13 20:24:34.315484 kubelet[2821]: I0113 20:24:34.315433    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-6-49e4a12287" podStartSLOduration=1.315363448 podStartE2EDuration="1.315363448s" podCreationTimestamp="2025-01-13 20:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:34.30174835 +0000 UTC m=+1.265144937" watchObservedRunningTime="2025-01-13 20:24:34.315363448 +0000 UTC m=+1.278760075"
Jan 13 20:24:34.328335 kubelet[2821]: I0113 20:24:34.328283    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-6-49e4a12287" podStartSLOduration=1.3282300949999999 podStartE2EDuration="1.328230095s" podCreationTimestamp="2025-01-13 20:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:34.316160461 +0000 UTC m=+1.279557128" watchObservedRunningTime="2025-01-13 20:24:34.328230095 +0000 UTC m=+1.291626682"
Jan 13 20:24:34.341733 kubelet[2821]: I0113 20:24:34.341678    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-6-49e4a12287" podStartSLOduration=1.34163015 podStartE2EDuration="1.34163015s" podCreationTimestamp="2025-01-13 20:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:34.328974147 +0000 UTC m=+1.292370774" watchObservedRunningTime="2025-01-13 20:24:34.34163015 +0000 UTC m=+1.305026777"
Jan 13 20:24:35.661988 sudo[1872]: pam_unix(sudo:session): session closed for user root
Jan 13 20:24:35.823067 sshd[1871]: Connection closed by 147.75.109.163 port 36684
Jan 13 20:24:35.824088 sshd-session[1869]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:35.829017 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:24:35.829466 systemd[1]: sshd@6-138.199.153.83:22-147.75.109.163:36684.service: Deactivated successfully.
Jan 13 20:24:35.833161 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:24:35.833388 systemd[1]: session-7.scope: Consumed 8.311s CPU time, 190.9M memory peak, 0B memory swap peak.
Jan 13 20:24:35.835750 systemd-logind[1461]: Removed session 7.
Jan 13 20:24:47.031702 kubelet[2821]: I0113 20:24:47.031040    2821 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:24:47.032211 kubelet[2821]: I0113 20:24:47.032170    2821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:24:47.032265 containerd[1481]: time="2025-01-13T20:24:47.031818014Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
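containerd keeps polling /etc/cni/net.d until some component installs a network config; in this boot it is the Cilium agent brought up below that eventually drops one. For orientation, a sketch of the minimal conf a Cilium v1.12-era agent writes (the exact field set varies by version, so treat this as an assumption rather than the file this node produced):

    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni",
      "enable-debug": false
    }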
Jan 13 20:24:48.044626 kubelet[2821]: I0113 20:24:48.044573    2821 topology_manager.go:215] "Topology Admit Handler" podUID="fde277d0-b309-49be-8111-01007f4e93e8" podNamespace="kube-system" podName="kube-proxy-p75fx"
Jan 13 20:24:48.058884 systemd[1]: Created slice kubepods-besteffort-podfde277d0_b309_49be_8111_01007f4e93e8.slice - libcontainer container kubepods-besteffort-podfde277d0_b309_49be_8111_01007f4e93e8.slice.
Jan 13 20:24:48.073320 kubelet[2821]: I0113 20:24:48.073279    2821 topology_manager.go:215] "Topology Admit Handler" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" podNamespace="kube-system" podName="cilium-6knrb"
Jan 13 20:24:48.083478 kubelet[2821]: I0113 20:24:48.083429    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlvm\" (UniqueName: \"kubernetes.io/projected/fde277d0-b309-49be-8111-01007f4e93e8-kube-api-access-mxlvm\") pod \"kube-proxy-p75fx\" (UID: \"fde277d0-b309-49be-8111-01007f4e93e8\") " pod="kube-system/kube-proxy-p75fx"
Jan 13 20:24:48.083613 kubelet[2821]: I0113 20:24:48.083479    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fde277d0-b309-49be-8111-01007f4e93e8-lib-modules\") pod \"kube-proxy-p75fx\" (UID: \"fde277d0-b309-49be-8111-01007f4e93e8\") " pod="kube-system/kube-proxy-p75fx"
Jan 13 20:24:48.083613 kubelet[2821]: I0113 20:24:48.083525    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fde277d0-b309-49be-8111-01007f4e93e8-kube-proxy\") pod \"kube-proxy-p75fx\" (UID: \"fde277d0-b309-49be-8111-01007f4e93e8\") " pod="kube-system/kube-proxy-p75fx"
Jan 13 20:24:48.083613 kubelet[2821]: I0113 20:24:48.083545    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fde277d0-b309-49be-8111-01007f4e93e8-xtables-lock\") pod \"kube-proxy-p75fx\" (UID: \"fde277d0-b309-49be-8111-01007f4e93e8\") " pod="kube-system/kube-proxy-p75fx"
Jan 13 20:24:48.089035 systemd[1]: Created slice kubepods-burstable-pod3ce12128_e669_4946_b129_f0f9a7dff7d9.slice - libcontainer container kubepods-burstable-pod3ce12128_e669_4946_b129_f0f9a7dff7d9.slice.
Jan 13 20:24:48.184782 kubelet[2821]: I0113 20:24:48.184737    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-lib-modules\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184782 kubelet[2821]: I0113 20:24:48.184788    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-cgroup\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184820    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-run\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184847    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-bpf-maps\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184871    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-xtables-lock\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184914    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-hostproc\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184941    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cni-path\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.184983 kubelet[2821]: I0113 20:24:48.184964    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-net\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185217 kubelet[2821]: I0113 20:24:48.184988    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-kernel\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185217 kubelet[2821]: I0113 20:24:48.185008    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-etc-cni-netd\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185217 kubelet[2821]: I0113 20:24:48.185084    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-config-path\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185217 kubelet[2821]: I0113 20:24:48.185106    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-hubble-tls\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185217 kubelet[2821]: I0113 20:24:48.185175    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ce12128-e669-4946-b129-f0f9a7dff7d9-clustermesh-secrets\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.185408 kubelet[2821]: I0113 20:24:48.185200    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkmj\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-kube-api-access-wfkmj\") pod \"cilium-6knrb\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") " pod="kube-system/cilium-6knrb"
Jan 13 20:24:48.227509 kubelet[2821]: I0113 20:24:48.225045    2821 topology_manager.go:215] "Topology Admit Handler" podUID="379ee6dc-408a-4db4-9545-4fcd69154c0d" podNamespace="kube-system" podName="cilium-operator-5cc964979-5d55p"
Jan 13 20:24:48.240720 systemd[1]: Created slice kubepods-besteffort-pod379ee6dc_408a_4db4_9545_4fcd69154c0d.slice - libcontainer container kubepods-besteffort-pod379ee6dc_408a_4db4_9545_4fcd69154c0d.slice.
Jan 13 20:24:48.286596 kubelet[2821]: I0113 20:24:48.286278    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crkr\" (UniqueName: \"kubernetes.io/projected/379ee6dc-408a-4db4-9545-4fcd69154c0d-kube-api-access-5crkr\") pod \"cilium-operator-5cc964979-5d55p\" (UID: \"379ee6dc-408a-4db4-9545-4fcd69154c0d\") " pod="kube-system/cilium-operator-5cc964979-5d55p"
Jan 13 20:24:48.286596 kubelet[2821]: I0113 20:24:48.286448    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ee6dc-408a-4db4-9545-4fcd69154c0d-cilium-config-path\") pod \"cilium-operator-5cc964979-5d55p\" (UID: \"379ee6dc-408a-4db4-9545-4fcd69154c0d\") " pod="kube-system/cilium-operator-5cc964979-5d55p"
Jan 13 20:24:48.369564 containerd[1481]: time="2025-01-13T20:24:48.369146811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p75fx,Uid:fde277d0-b309-49be-8111-01007f4e93e8,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:48.400271 containerd[1481]: time="2025-01-13T20:24:48.400179773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6knrb,Uid:3ce12128-e669-4946-b129-f0f9a7dff7d9,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:48.407735 containerd[1481]: time="2025-01-13T20:24:48.406801644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:48.407735 containerd[1481]: time="2025-01-13T20:24:48.406923686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:48.407735 containerd[1481]: time="2025-01-13T20:24:48.406940726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.407735 containerd[1481]: time="2025-01-13T20:24:48.407033568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.432507 systemd[1]: Started cri-containerd-837b42a0e496544c89b7d877a8bfb103c372fdf6671beb30d3d10af46503d464.scope - libcontainer container 837b42a0e496544c89b7d877a8bfb103c372fdf6671beb30d3d10af46503d464.
Jan 13 20:24:48.444660 containerd[1481]: time="2025-01-13T20:24:48.443634143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:48.445456 containerd[1481]: time="2025-01-13T20:24:48.444358755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:48.445456 containerd[1481]: time="2025-01-13T20:24:48.444383516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.445456 containerd[1481]: time="2025-01-13T20:24:48.444500958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.470565 systemd[1]: Started cri-containerd-992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81.scope - libcontainer container 992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81.
Jan 13 20:24:48.484957 containerd[1481]: time="2025-01-13T20:24:48.484913637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p75fx,Uid:fde277d0-b309-49be-8111-01007f4e93e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"837b42a0e496544c89b7d877a8bfb103c372fdf6671beb30d3d10af46503d464\""
Jan 13 20:24:48.499615 containerd[1481]: time="2025-01-13T20:24:48.499202477Z" level=info msg="CreateContainer within sandbox \"837b42a0e496544c89b7d877a8bfb103c372fdf6671beb30d3d10af46503d464\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:24:48.523430 containerd[1481]: time="2025-01-13T20:24:48.523372043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6knrb,Uid:3ce12128-e669-4946-b129-f0f9a7dff7d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\""
Jan 13 20:24:48.525126 containerd[1481]: time="2025-01-13T20:24:48.525045392Z" level=info msg="CreateContainer within sandbox \"837b42a0e496544c89b7d877a8bfb103c372fdf6671beb30d3d10af46503d464\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7479f2113a9a3e361324663011c0085ebe51ec5e1e4c4c2db02c7517b8da543\""
Jan 13 20:24:48.528623 containerd[1481]: time="2025-01-13T20:24:48.526982344Z" level=info msg="StartContainer for \"a7479f2113a9a3e361324663011c0085ebe51ec5e1e4c4c2db02c7517b8da543\""
Jan 13 20:24:48.529770 containerd[1481]: time="2025-01-13T20:24:48.529729510Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 20:24:48.547131 containerd[1481]: time="2025-01-13T20:24:48.547068402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5d55p,Uid:379ee6dc-408a-4db4-9545-4fcd69154c0d,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:48.563513 systemd[1]: Started cri-containerd-a7479f2113a9a3e361324663011c0085ebe51ec5e1e4c4c2db02c7517b8da543.scope - libcontainer container a7479f2113a9a3e361324663011c0085ebe51ec5e1e4c4c2db02c7517b8da543.
Jan 13 20:24:48.587381 containerd[1481]: time="2025-01-13T20:24:48.587251317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:48.587381 containerd[1481]: time="2025-01-13T20:24:48.587316438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:48.587381 containerd[1481]: time="2025-01-13T20:24:48.587339239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.588548 containerd[1481]: time="2025-01-13T20:24:48.588446897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:48.611213 containerd[1481]: time="2025-01-13T20:24:48.611032397Z" level=info msg="StartContainer for \"a7479f2113a9a3e361324663011c0085ebe51ec5e1e4c4c2db02c7517b8da543\" returns successfully"
Jan 13 20:24:48.617489 systemd[1]: Started cri-containerd-642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5.scope - libcontainer container 642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5.
Jan 13 20:24:48.665573 containerd[1481]: time="2025-01-13T20:24:48.665382470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5d55p,Uid:379ee6dc-408a-4db4-9545-4fcd69154c0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\""
Jan 13 20:24:49.312194 kubelet[2821]: I0113 20:24:49.311727    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p75fx" podStartSLOduration=1.311679267 podStartE2EDuration="1.311679267s" podCreationTimestamp="2025-01-13 20:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:49.310933534 +0000 UTC m=+16.274330161" watchObservedRunningTime="2025-01-13 20:24:49.311679267 +0000 UTC m=+16.275075894"
Jan 13 20:24:52.647803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598274283.mount: Deactivated successfully.
Jan 13 20:24:54.204834 containerd[1481]: time="2025-01-13T20:24:54.203619889Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:54.204834 containerd[1481]: time="2025-01-13T20:24:54.204765108Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651506"
Jan 13 20:24:54.205592 containerd[1481]: time="2025-01-13T20:24:54.205560602Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:54.208328 containerd[1481]: time="2025-01-13T20:24:54.208275488Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.678274213s"
Jan 13 20:24:54.208328 containerd[1481]: time="2025-01-13T20:24:54.208323329Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 13 20:24:54.210810 containerd[1481]: time="2025-01-13T20:24:54.210768011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:24:54.213551 containerd[1481]: time="2025-01-13T20:24:54.213291214Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:24:54.228449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943151187.mount: Deactivated successfully.
Jan 13 20:24:54.232451 containerd[1481]: time="2025-01-13T20:24:54.232403059Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\""
Jan 13 20:24:54.234974 containerd[1481]: time="2025-01-13T20:24:54.233363236Z" level=info msg="StartContainer for \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\""
Jan 13 20:24:54.268555 systemd[1]: Started cri-containerd-3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157.scope - libcontainer container 3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157.
Jan 13 20:24:54.301274 containerd[1481]: time="2025-01-13T20:24:54.300919907Z" level=info msg="StartContainer for \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\" returns successfully"
Jan 13 20:24:54.326838 systemd[1]: cri-containerd-3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157.scope: Deactivated successfully.
Jan 13 20:24:54.584321 containerd[1481]: time="2025-01-13T20:24:54.584221814Z" level=info msg="shim disconnected" id=3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157 namespace=k8s.io
Jan 13 20:24:54.584321 containerd[1481]: time="2025-01-13T20:24:54.584313936Z" level=warning msg="cleaning up after shim disconnected" id=3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157 namespace=k8s.io
Jan 13 20:24:54.584321 containerd[1481]: time="2025-01-13T20:24:54.584325136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:55.227008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157-rootfs.mount: Deactivated successfully.
Jan 13 20:24:55.320407 containerd[1481]: time="2025-01-13T20:24:55.320352769Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:24:55.349319 containerd[1481]: time="2025-01-13T20:24:55.347329670Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\""
Jan 13 20:24:55.351755 containerd[1481]: time="2025-01-13T20:24:55.351451500Z" level=info msg="StartContainer for \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\""
Jan 13 20:24:55.388628 systemd[1]: Started cri-containerd-299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268.scope - libcontainer container 299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268.
Jan 13 20:24:55.426469 containerd[1481]: time="2025-01-13T20:24:55.426295618Z" level=info msg="StartContainer for \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\" returns successfully"
Jan 13 20:24:55.441072 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:24:55.441531 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:24:55.441789 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:24:55.452760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:24:55.453098 systemd[1]: cri-containerd-299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268.scope: Deactivated successfully.
Jan 13 20:24:55.475327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:24:55.485218 containerd[1481]: time="2025-01-13T20:24:55.484967660Z" level=info msg="shim disconnected" id=299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268 namespace=k8s.io
Jan 13 20:24:55.485684 containerd[1481]: time="2025-01-13T20:24:55.485194184Z" level=warning msg="cleaning up after shim disconnected" id=299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268 namespace=k8s.io
Jan 13 20:24:55.485684 containerd[1481]: time="2025-01-13T20:24:55.485548070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:56.132178 containerd[1481]: time="2025-01-13T20:24:56.132114674Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:56.133991 containerd[1481]: time="2025-01-13T20:24:56.133800663Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282"
Jan 13 20:24:56.135263 containerd[1481]: time="2025-01-13T20:24:56.134892602Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:24:56.137147 containerd[1481]: time="2025-01-13T20:24:56.136493189Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.925664938s"
Jan 13 20:24:56.137147 containerd[1481]: time="2025-01-13T20:24:56.136531910Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:24:56.139041 containerd[1481]: time="2025-01-13T20:24:56.139005872Z" level=info msg="CreateContainer within sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:24:56.159754 containerd[1481]: time="2025-01-13T20:24:56.159674146Z" level=info msg="CreateContainer within sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\""
Jan 13 20:24:56.164466 containerd[1481]: time="2025-01-13T20:24:56.163584173Z" level=info msg="StartContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\""
Jan 13 20:24:56.195509 systemd[1]: Started cri-containerd-2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1.scope - libcontainer container 2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1.
Jan 13 20:24:56.228661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268-rootfs.mount: Deactivated successfully.
Jan 13 20:24:56.229646 containerd[1481]: time="2025-01-13T20:24:56.229534261Z" level=info msg="StartContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" returns successfully"
Jan 13 20:24:56.329723 containerd[1481]: time="2025-01-13T20:24:56.329333208Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:24:56.360262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601918129.mount: Deactivated successfully.
Jan 13 20:24:56.362512 containerd[1481]: time="2025-01-13T20:24:56.362072808Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\""
Jan 13 20:24:56.365376 containerd[1481]: time="2025-01-13T20:24:56.365100180Z" level=info msg="StartContainer for \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\""
Jan 13 20:24:56.381587 kubelet[2821]: I0113 20:24:56.381421    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5d55p" podStartSLOduration=0.912437857 podStartE2EDuration="8.381345138s" podCreationTimestamp="2025-01-13 20:24:48 +0000 UTC" firstStartedPulling="2025-01-13 20:24:48.667825592 +0000 UTC m=+15.631222179" lastFinishedPulling="2025-01-13 20:24:56.136732833 +0000 UTC m=+23.100129460" observedRunningTime="2025-01-13 20:24:56.379967195 +0000 UTC m=+23.343363822" watchObservedRunningTime="2025-01-13 20:24:56.381345138 +0000 UTC m=+23.344741725"
Jan 13 20:24:56.432474 systemd[1]: Started cri-containerd-a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5.scope - libcontainer container a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5.
Jan 13 20:24:56.482524 containerd[1481]: time="2025-01-13T20:24:56.482471548Z" level=info msg="StartContainer for \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\" returns successfully"
Jan 13 20:24:56.495400 systemd[1]: cri-containerd-a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5.scope: Deactivated successfully.
Jan 13 20:24:56.590914 containerd[1481]: time="2025-01-13T20:24:56.590150150Z" level=info msg="shim disconnected" id=a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5 namespace=k8s.io
Jan 13 20:24:56.591145 containerd[1481]: time="2025-01-13T20:24:56.590920844Z" level=warning msg="cleaning up after shim disconnected" id=a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5 namespace=k8s.io
Jan 13 20:24:56.591145 containerd[1481]: time="2025-01-13T20:24:56.590958044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:57.227636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5-rootfs.mount: Deactivated successfully.
Jan 13 20:24:57.341286 containerd[1481]: time="2025-01-13T20:24:57.340828124Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:24:57.365581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785086604.mount: Deactivated successfully.
Jan 13 20:24:57.375914 containerd[1481]: time="2025-01-13T20:24:57.375636081Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\""
Jan 13 20:24:57.379006 containerd[1481]: time="2025-01-13T20:24:57.378450929Z" level=info msg="StartContainer for \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\""
Jan 13 20:24:57.418516 systemd[1]: Started cri-containerd-e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a.scope - libcontainer container e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a.
Jan 13 20:24:57.451652 systemd[1]: cri-containerd-e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a.scope: Deactivated successfully.
Jan 13 20:24:57.454612 containerd[1481]: time="2025-01-13T20:24:57.454539153Z" level=info msg="StartContainer for \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\" returns successfully"
Jan 13 20:24:57.481465 containerd[1481]: time="2025-01-13T20:24:57.481185810Z" level=info msg="shim disconnected" id=e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a namespace=k8s.io
Jan 13 20:24:57.481465 containerd[1481]: time="2025-01-13T20:24:57.481270811Z" level=warning msg="cleaning up after shim disconnected" id=e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a namespace=k8s.io
Jan 13 20:24:57.481465 containerd[1481]: time="2025-01-13T20:24:57.481282052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:58.225063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a-rootfs.mount: Deactivated successfully.
Jan 13 20:24:58.352028 containerd[1481]: time="2025-01-13T20:24:58.351969187Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:24:58.390649 containerd[1481]: time="2025-01-13T20:24:58.390581370Z" level=info msg="CreateContainer within sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\""
Jan 13 20:24:58.394264 containerd[1481]: time="2025-01-13T20:24:58.392264679Z" level=info msg="StartContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\""
Jan 13 20:24:58.450433 systemd[1]: Started cri-containerd-416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59.scope - libcontainer container 416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59.
Jan 13 20:24:58.521547 containerd[1481]: time="2025-01-13T20:24:58.521002369Z" level=info msg="StartContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" returns successfully"
Jan 13 20:24:58.618652 kubelet[2821]: I0113 20:24:58.617703    2821 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:24:58.652602 kubelet[2821]: I0113 20:24:58.651279    2821 topology_manager.go:215] "Topology Admit Handler" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f" podNamespace="kube-system" podName="coredns-76f75df574-2584n"
Jan 13 20:24:58.655629 kubelet[2821]: I0113 20:24:58.654976    2821 topology_manager.go:215] "Topology Admit Handler" podUID="8e70c202-2570-48f6-a550-23607fe9fba0" podNamespace="kube-system" podName="coredns-76f75df574-q7jp6"
Jan 13 20:24:58.660739 systemd[1]: Created slice kubepods-burstable-pod6cc99e97_2145_4576_972b_6db3fdabd52f.slice - libcontainer container kubepods-burstable-pod6cc99e97_2145_4576_972b_6db3fdabd52f.slice.
Jan 13 20:24:58.668368 kubelet[2821]: I0113 20:24:58.667976    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fqp\" (UniqueName: \"kubernetes.io/projected/6cc99e97-2145-4576-972b-6db3fdabd52f-kube-api-access-q9fqp\") pod \"coredns-76f75df574-2584n\" (UID: \"6cc99e97-2145-4576-972b-6db3fdabd52f\") " pod="kube-system/coredns-76f75df574-2584n"
Jan 13 20:24:58.668368 kubelet[2821]: I0113 20:24:58.668023    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc99e97-2145-4576-972b-6db3fdabd52f-config-volume\") pod \"coredns-76f75df574-2584n\" (UID: \"6cc99e97-2145-4576-972b-6db3fdabd52f\") " pod="kube-system/coredns-76f75df574-2584n"
Jan 13 20:24:58.671995 systemd[1]: Created slice kubepods-burstable-pod8e70c202_2570_48f6_a550_23607fe9fba0.slice - libcontainer container kubepods-burstable-pod8e70c202_2570_48f6_a550_23607fe9fba0.slice.
Jan 13 20:24:58.769114 kubelet[2821]: I0113 20:24:58.769058    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plbg\" (UniqueName: \"kubernetes.io/projected/8e70c202-2570-48f6-a550-23607fe9fba0-kube-api-access-5plbg\") pod \"coredns-76f75df574-q7jp6\" (UID: \"8e70c202-2570-48f6-a550-23607fe9fba0\") " pod="kube-system/coredns-76f75df574-q7jp6"
Jan 13 20:24:58.769114 kubelet[2821]: I0113 20:24:58.769112    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e70c202-2570-48f6-a550-23607fe9fba0-config-volume\") pod \"coredns-76f75df574-q7jp6\" (UID: \"8e70c202-2570-48f6-a550-23607fe9fba0\") " pod="kube-system/coredns-76f75df574-q7jp6"
Jan 13 20:24:58.969786 containerd[1481]: time="2025-01-13T20:24:58.969113104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2584n,Uid:6cc99e97-2145-4576-972b-6db3fdabd52f,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:58.977048 containerd[1481]: time="2025-01-13T20:24:58.977006080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q7jp6,Uid:8e70c202-2570-48f6-a550-23607fe9fba0,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:59.371909 kubelet[2821]: I0113 20:24:59.371728    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6knrb" podStartSLOduration=5.691872069 podStartE2EDuration="11.371674148s" podCreationTimestamp="2025-01-13 20:24:48 +0000 UTC" firstStartedPulling="2025-01-13 20:24:48.528898336 +0000 UTC m=+15.492294923" lastFinishedPulling="2025-01-13 20:24:54.208700295 +0000 UTC m=+21.172097002" observedRunningTime="2025-01-13 20:24:59.369900198 +0000 UTC m=+26.333296865" watchObservedRunningTime="2025-01-13 20:24:59.371674148 +0000 UTC m=+26.335070775"
Jan 13 20:25:00.755452 systemd-networkd[1368]: cilium_host: Link UP
Jan 13 20:25:00.755752 systemd-networkd[1368]: cilium_net: Link UP
Jan 13 20:25:00.755898 systemd-networkd[1368]: cilium_net: Gained carrier
Jan 13 20:25:00.756017 systemd-networkd[1368]: cilium_host: Gained carrier
Jan 13 20:25:00.887223 systemd-networkd[1368]: cilium_vxlan: Link UP
Jan 13 20:25:00.889067 systemd-networkd[1368]: cilium_vxlan: Gained carrier
Jan 13 20:25:01.021472 systemd-networkd[1368]: cilium_host: Gained IPv6LL
Jan 13 20:25:01.199269 kernel: NET: Registered PF_ALG protocol family
Jan 13 20:25:01.573493 systemd-networkd[1368]: cilium_net: Gained IPv6LL
Jan 13 20:25:01.957389 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL
Jan 13 20:25:02.045878 systemd-networkd[1368]: lxc_health: Link UP
Jan 13 20:25:02.053156 systemd-networkd[1368]: lxc_health: Gained carrier
Jan 13 20:25:02.559751 systemd-networkd[1368]: lxc7a7993a0e55e: Link UP
Jan 13 20:25:02.565345 kernel: eth0: renamed from tmp1eb08
Jan 13 20:25:02.575445 systemd-networkd[1368]: lxc7a7993a0e55e: Gained carrier
Jan 13 20:25:02.575715 systemd-networkd[1368]: lxc9fca746711ce: Link UP
Jan 13 20:25:02.585100 kernel: eth0: renamed from tmpa476a
Jan 13 20:25:02.588656 systemd-networkd[1368]: lxc9fca746711ce: Gained carrier
Jan 13 20:25:03.685449 systemd-networkd[1368]: lxc9fca746711ce: Gained IPv6LL
Jan 13 20:25:03.750548 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jan 13 20:25:04.137399 systemd-networkd[1368]: lxc7a7993a0e55e: Gained IPv6LL
Jan 13 20:25:07.112628 containerd[1481]: time="2025-01-13T20:25:07.112390904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:25:07.112628 containerd[1481]: time="2025-01-13T20:25:07.112459906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:25:07.112628 containerd[1481]: time="2025-01-13T20:25:07.112473706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:07.112628 containerd[1481]: time="2025-01-13T20:25:07.112574068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:07.151191 systemd[1]: run-containerd-runc-k8s.io-1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8-runc.2TD89p.mount: Deactivated successfully.
Jan 13 20:25:07.161512 systemd[1]: Started cri-containerd-1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8.scope - libcontainer container 1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8.
Jan 13 20:25:07.185923 containerd[1481]: time="2025-01-13T20:25:07.185191972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:25:07.185923 containerd[1481]: time="2025-01-13T20:25:07.185293934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:25:07.185923 containerd[1481]: time="2025-01-13T20:25:07.185310854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:07.185923 containerd[1481]: time="2025-01-13T20:25:07.185398176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:07.232576 systemd[1]: Started cri-containerd-a476a89f84776502b610bedb3b9c492e360ad36e94fb5634d47ead4cf13325e5.scope - libcontainer container a476a89f84776502b610bedb3b9c492e360ad36e94fb5634d47ead4cf13325e5.
Jan 13 20:25:07.250217 containerd[1481]: time="2025-01-13T20:25:07.250106742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2584n,Uid:6cc99e97-2145-4576-972b-6db3fdabd52f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8\""
Jan 13 20:25:07.256381 containerd[1481]: time="2025-01-13T20:25:07.256326611Z" level=info msg="CreateContainer within sandbox \"1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:25:07.278578 containerd[1481]: time="2025-01-13T20:25:07.278510957Z" level=info msg="CreateContainer within sandbox \"1eb08b3733b990ba8d6b70d2b8460bbd3fe238a6e864fdb750a1ff6ec46175a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33bc4eca1987275439e0e959db683f8b7dbf237afb7ddc927667e285365a5543\""
Jan 13 20:25:07.280380 containerd[1481]: time="2025-01-13T20:25:07.280330829Z" level=info msg="StartContainer for \"33bc4eca1987275439e0e959db683f8b7dbf237afb7ddc927667e285365a5543\""
Jan 13 20:25:07.311367 containerd[1481]: time="2025-01-13T20:25:07.308947007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q7jp6,Uid:8e70c202-2570-48f6-a550-23607fe9fba0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a476a89f84776502b610bedb3b9c492e360ad36e94fb5634d47ead4cf13325e5\""
Jan 13 20:25:07.315722 containerd[1481]: time="2025-01-13T20:25:07.314401142Z" level=info msg="CreateContainer within sandbox \"a476a89f84776502b610bedb3b9c492e360ad36e94fb5634d47ead4cf13325e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:25:07.347495 systemd[1]: Started cri-containerd-33bc4eca1987275439e0e959db683f8b7dbf237afb7ddc927667e285365a5543.scope - libcontainer container 33bc4eca1987275439e0e959db683f8b7dbf237afb7ddc927667e285365a5543.
Jan 13 20:25:07.356040 containerd[1481]: time="2025-01-13T20:25:07.355454337Z" level=info msg="CreateContainer within sandbox \"a476a89f84776502b610bedb3b9c492e360ad36e94fb5634d47ead4cf13325e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a8ce70365ed58a9784a5976aa6f86412ac39e91ddab8a4c100dbedc9b9abdfc\""
Jan 13 20:25:07.357568 containerd[1481]: time="2025-01-13T20:25:07.357518893Z" level=info msg="StartContainer for \"9a8ce70365ed58a9784a5976aa6f86412ac39e91ddab8a4c100dbedc9b9abdfc\""
Jan 13 20:25:07.399578 systemd[1]: Started cri-containerd-9a8ce70365ed58a9784a5976aa6f86412ac39e91ddab8a4c100dbedc9b9abdfc.scope - libcontainer container 9a8ce70365ed58a9784a5976aa6f86412ac39e91ddab8a4c100dbedc9b9abdfc.
Jan 13 20:25:07.415142 containerd[1481]: time="2025-01-13T20:25:07.414930453Z" level=info msg="StartContainer for \"33bc4eca1987275439e0e959db683f8b7dbf237afb7ddc927667e285365a5543\" returns successfully"
Jan 13 20:25:07.450502 containerd[1481]: time="2025-01-13T20:25:07.450422871Z" level=info msg="StartContainer for \"9a8ce70365ed58a9784a5976aa6f86412ac39e91ddab8a4c100dbedc9b9abdfc\" returns successfully"
Jan 13 20:25:08.406244 kubelet[2821]: I0113 20:25:08.406179    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q7jp6" podStartSLOduration=20.405652554 podStartE2EDuration="20.405652554s" podCreationTimestamp="2025-01-13 20:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:25:08.404567255 +0000 UTC m=+35.367963922" watchObservedRunningTime="2025-01-13 20:25:08.405652554 +0000 UTC m=+35.369049181"
Jan 13 20:25:08.429254 kubelet[2821]: I0113 20:25:08.427688    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2584n" podStartSLOduration=20.427645577 podStartE2EDuration="20.427645577s" podCreationTimestamp="2025-01-13 20:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:25:08.42552142 +0000 UTC m=+35.388918127" watchObservedRunningTime="2025-01-13 20:25:08.427645577 +0000 UTC m=+35.391042204"
Jan 13 20:26:26.951189 update_engine[1462]: I20250113 20:26:26.950630  1462 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 13 20:26:26.951189 update_engine[1462]: I20250113 20:26:26.950709  1462 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 13 20:26:26.951189 update_engine[1462]: I20250113 20:26:26.951000  1462 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.952818  1462 omaha_request_params.cc:62] Current group set to stable
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.952961  1462 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.952973  1462 update_attempter.cc:643] Scheduling an action processor start.
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.952996  1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.953032  1462 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.953094  1462 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.953102  1462 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Jan 13 20:26:26.954047 update_engine[1462]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Jan 13 20:26:26.954047 update_engine[1462]:     <os version="Chateau" platform="CoreOS" sp="4152.2.0_aarch64"></os>
Jan 13 20:26:26.954047 update_engine[1462]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.0" track="stable" bootid="{b1ade0e9-247b-474f-bcc5-7fbe154fa83b}" oem="hetzner" oemversion="0" alephversion="4152.2.0" machineid="453259727a0a46a8a8df4a1d5c708d87" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" >
Jan 13 20:26:26.954047 update_engine[1462]:         <ping active="1"></ping>
Jan 13 20:26:26.954047 update_engine[1462]:         <updatecheck></updatecheck>
Jan 13 20:26:26.954047 update_engine[1462]:         <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Jan 13 20:26:26.954047 update_engine[1462]:     </app>
Jan 13 20:26:26.954047 update_engine[1462]: </request>
Jan 13 20:26:26.954047 update_engine[1462]: I20250113 20:26:26.953109  1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:26:26.955446 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 13 20:26:26.959039 update_engine[1462]: I20250113 20:26:26.958503  1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:26:26.959039 update_engine[1462]: I20250113 20:26:26.958909  1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:26:26.959637 update_engine[1462]: E20250113 20:26:26.959546  1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:26:26.959737 update_engine[1462]: I20250113 20:26:26.959668  1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 13 20:26:36.879398 update_engine[1462]: I20250113 20:26:36.879309  1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:26:36.879810 update_engine[1462]: I20250113 20:26:36.879602  1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:26:36.879979 update_engine[1462]: I20250113 20:26:36.879861  1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:26:36.880442 update_engine[1462]: E20250113 20:26:36.880348  1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:26:36.880442 update_engine[1462]: I20250113 20:26:36.880405  1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 13 20:26:46.881552 update_engine[1462]: I20250113 20:26:46.880476  1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:26:46.881552 update_engine[1462]: I20250113 20:26:46.880937  1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:26:46.881552 update_engine[1462]: I20250113 20:26:46.881322  1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:26:46.882700 update_engine[1462]: E20250113 20:26:46.882550  1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:26:46.882700 update_engine[1462]: I20250113 20:26:46.882663  1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 13 20:26:56.872318 update_engine[1462]: I20250113 20:26:56.872028  1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:26:56.872839 update_engine[1462]: I20250113 20:26:56.872520  1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:26:56.873225 update_engine[1462]: I20250113 20:26:56.873059  1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:26:56.873642 update_engine[1462]: E20250113 20:26:56.873576  1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:26:56.873739 update_engine[1462]: I20250113 20:26:56.873651  1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 13 20:26:56.873739 update_engine[1462]: I20250113 20:26:56.873663  1462 omaha_request_action.cc:617] Omaha request response:
Jan 13 20:26:56.873828 update_engine[1462]: E20250113 20:26:56.873750  1462 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873771  1462 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873780  1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873785  1462 update_attempter.cc:306] Processing Done.
Jan 13 20:26:56.873828 update_engine[1462]: E20250113 20:26:56.873802  1462 update_attempter.cc:619] Update failed.
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873808  1462 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873814  1462 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 13 20:26:56.873828 update_engine[1462]: I20250113 20:26:56.873821  1462 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 13 20:26:56.874125 update_engine[1462]: I20250113 20:26:56.873899  1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 13 20:26:56.874125 update_engine[1462]: I20250113 20:26:56.873924  1462 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 13 20:26:56.874125 update_engine[1462]: I20250113 20:26:56.873930  1462 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Jan 13 20:26:56.874125 update_engine[1462]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Jan 13 20:26:56.874125 update_engine[1462]:     <os version="Chateau" platform="CoreOS" sp="4152.2.0_aarch64"></os>
Jan 13 20:26:56.874125 update_engine[1462]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.0" track="stable" bootid="{b1ade0e9-247b-474f-bcc5-7fbe154fa83b}" oem="hetzner" oemversion="0" alephversion="4152.2.0" machineid="453259727a0a46a8a8df4a1d5c708d87" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" >
Jan 13 20:26:56.874125 update_engine[1462]:         <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Jan 13 20:26:56.874125 update_engine[1462]:     </app>
Jan 13 20:26:56.874125 update_engine[1462]: </request>
Jan 13 20:26:56.874125 update_engine[1462]: I20250113 20:26:56.873937  1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:26:56.874125 update_engine[1462]: I20250113 20:26:56.874118  1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:26:56.874885 update_engine[1462]: I20250113 20:26:56.874351  1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:26:56.874917 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 13 20:26:56.875197 update_engine[1462]: E20250113 20:26:56.874857  1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874936  1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874945  1462 omaha_request_action.cc:617] Omaha request response:
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874953  1462 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874959  1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874965  1462 update_attempter.cc:306] Processing Done.
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874974  1462 update_attempter.cc:310] Error event sent.
Jan 13 20:26:56.875197 update_engine[1462]: I20250113 20:26:56.874984  1462 update_check_scheduler.cc:74] Next update check in 49m3s
Jan 13 20:26:56.875424 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 13 20:29:22.738640 systemd[1]: Started sshd@7-138.199.153.83:22-147.75.109.163:51270.service - OpenSSH per-connection server daemon (147.75.109.163:51270).
Jan 13 20:29:23.728957 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 51270 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:23.731081 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:23.736634 systemd-logind[1461]: New session 8 of user core.
Jan 13 20:29:23.743533 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:29:24.506091 sshd[4235]: Connection closed by 147.75.109.163 port 51270
Jan 13 20:29:24.507493 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:24.513856 systemd[1]: sshd@7-138.199.153.83:22-147.75.109.163:51270.service: Deactivated successfully.
Jan 13 20:29:24.514300 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:29:24.518694 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:29:24.520585 systemd-logind[1461]: Removed session 8.
Jan 13 20:29:29.692031 systemd[1]: Started sshd@8-138.199.153.83:22-147.75.109.163:42484.service - OpenSSH per-connection server daemon (147.75.109.163:42484).
Jan 13 20:29:30.679604 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 42484 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:30.681897 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:30.686180 systemd-logind[1461]: New session 9 of user core.
Jan 13 20:29:30.694575 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:29:31.439308 sshd[4249]: Connection closed by 147.75.109.163 port 42484
Jan 13 20:29:31.440349 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:31.443995 systemd[1]: sshd@8-138.199.153.83:22-147.75.109.163:42484.service: Deactivated successfully.
Jan 13 20:29:31.446481 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:29:31.448352 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:29:31.449566 systemd-logind[1461]: Removed session 9.
Jan 13 20:29:36.619409 systemd[1]: Started sshd@9-138.199.153.83:22-147.75.109.163:42488.service - OpenSSH per-connection server daemon (147.75.109.163:42488).
Jan 13 20:29:37.629284 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 42488 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:37.630779 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:37.636670 systemd-logind[1461]: New session 10 of user core.
Jan 13 20:29:37.641473 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:29:38.393157 sshd[4264]: Connection closed by 147.75.109.163 port 42488
Jan 13 20:29:38.394057 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:38.398811 systemd[1]: sshd@9-138.199.153.83:22-147.75.109.163:42488.service: Deactivated successfully.
Jan 13 20:29:38.401830 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:29:38.403985 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:29:38.405363 systemd-logind[1461]: Removed session 10.
Jan 13 20:29:38.570304 systemd[1]: Started sshd@10-138.199.153.83:22-147.75.109.163:51408.service - OpenSSH per-connection server daemon (147.75.109.163:51408).
Jan 13 20:29:39.558483 sshd[4276]: Accepted publickey for core from 147.75.109.163 port 51408 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:39.561218 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:39.570143 systemd-logind[1461]: New session 11 of user core.
Jan 13 20:29:39.577470 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:29:40.363632 sshd[4278]: Connection closed by 147.75.109.163 port 51408
Jan 13 20:29:40.363513 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:40.372415 systemd[1]: sshd@10-138.199.153.83:22-147.75.109.163:51408.service: Deactivated successfully.
Jan 13 20:29:40.375977 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:29:40.378983 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:29:40.380687 systemd-logind[1461]: Removed session 11.
Jan 13 20:29:40.539596 systemd[1]: Started sshd@11-138.199.153.83:22-147.75.109.163:51424.service - OpenSSH per-connection server daemon (147.75.109.163:51424).
Jan 13 20:29:41.528329 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 51424 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:41.530182 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:41.536324 systemd-logind[1461]: New session 12 of user core.
Jan 13 20:29:41.548577 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:29:42.286559 sshd[4289]: Connection closed by 147.75.109.163 port 51424
Jan 13 20:29:42.287582 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:42.292887 systemd[1]: sshd@11-138.199.153.83:22-147.75.109.163:51424.service: Deactivated successfully.
Jan 13 20:29:42.297228 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:29:42.298762 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:29:42.300380 systemd-logind[1461]: Removed session 12.
Jan 13 20:29:47.464787 systemd[1]: Started sshd@12-138.199.153.83:22-147.75.109.163:51430.service - OpenSSH per-connection server daemon (147.75.109.163:51430).
Jan 13 20:29:48.463742 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 51430 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:48.465913 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:48.471114 systemd-logind[1461]: New session 13 of user core.
Jan 13 20:29:48.478582 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:29:49.369710 sshd[4302]: Connection closed by 147.75.109.163 port 51430
Jan 13 20:29:49.370499 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:49.378215 systemd[1]: sshd@12-138.199.153.83:22-147.75.109.163:51430.service: Deactivated successfully.
Jan 13 20:29:49.381317 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:29:49.382224 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:29:49.383560 systemd-logind[1461]: Removed session 13.
Jan 13 20:29:49.549993 systemd[1]: Started sshd@13-138.199.153.83:22-147.75.109.163:41092.service - OpenSSH per-connection server daemon (147.75.109.163:41092).
Jan 13 20:29:50.546290 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 41092 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:50.547692 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:50.553159 systemd-logind[1461]: New session 14 of user core.
Jan 13 20:29:50.556579 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:29:51.355531 sshd[4317]: Connection closed by 147.75.109.163 port 41092
Jan 13 20:29:51.355307 sshd-session[4315]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:51.359646 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:29:51.361847 systemd[1]: sshd@13-138.199.153.83:22-147.75.109.163:41092.service: Deactivated successfully.
Jan 13 20:29:51.364447 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:29:51.366864 systemd-logind[1461]: Removed session 14.
Jan 13 20:29:51.533635 systemd[1]: Started sshd@14-138.199.153.83:22-147.75.109.163:41100.service - OpenSSH per-connection server daemon (147.75.109.163:41100).
Jan 13 20:29:52.529616 sshd[4325]: Accepted publickey for core from 147.75.109.163 port 41100 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:52.532894 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:52.540410 systemd-logind[1461]: New session 15 of user core.
Jan 13 20:29:52.547753 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:29:54.871370 sshd[4327]: Connection closed by 147.75.109.163 port 41100
Jan 13 20:29:54.872297 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:54.877579 systemd[1]: sshd@14-138.199.153.83:22-147.75.109.163:41100.service: Deactivated successfully.
Jan 13 20:29:54.880709 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:29:54.883405 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:29:54.885666 systemd-logind[1461]: Removed session 15.
Jan 13 20:29:55.052653 systemd[1]: Started sshd@15-138.199.153.83:22-147.75.109.163:41116.service - OpenSSH per-connection server daemon (147.75.109.163:41116).
Jan 13 20:29:56.039225 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 41116 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:56.041763 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:56.046449 systemd-logind[1461]: New session 16 of user core.
Jan 13 20:29:56.052508 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:29:56.958924 sshd[4346]: Connection closed by 147.75.109.163 port 41116
Jan 13 20:29:56.959654 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:56.963877 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:29:56.965972 systemd[1]: sshd@15-138.199.153.83:22-147.75.109.163:41116.service: Deactivated successfully.
Jan 13 20:29:56.970622 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:29:56.972918 systemd-logind[1461]: Removed session 16.
Jan 13 20:29:57.141005 systemd[1]: Started sshd@16-138.199.153.83:22-147.75.109.163:41120.service - OpenSSH per-connection server daemon (147.75.109.163:41120).
Jan 13 20:29:58.128311 sshd[4355]: Accepted publickey for core from 147.75.109.163 port 41120 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:29:58.130259 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:29:58.137294 systemd-logind[1461]: New session 17 of user core.
Jan 13 20:29:58.146605 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:29:58.884275 sshd[4357]: Connection closed by 147.75.109.163 port 41120
Jan 13 20:29:58.883338 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Jan 13 20:29:58.889227 systemd[1]: sshd@16-138.199.153.83:22-147.75.109.163:41120.service: Deactivated successfully.
Jan 13 20:29:58.892758 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:29:58.893922 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:29:58.895163 systemd-logind[1461]: Removed session 17.
Jan 13 20:30:04.065425 systemd[1]: Started sshd@17-138.199.153.83:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012).
Jan 13 20:30:05.041321 sshd[4371]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:05.044336 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:05.049907 systemd-logind[1461]: New session 18 of user core.
Jan 13 20:30:05.056561 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:30:05.796099 sshd[4373]: Connection closed by 147.75.109.163 port 43012
Jan 13 20:30:05.797868 sshd-session[4371]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:05.803477 systemd[1]: sshd@17-138.199.153.83:22-147.75.109.163:43012.service: Deactivated successfully.
Jan 13 20:30:05.806803 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:30:05.808324 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:30:05.809679 systemd-logind[1461]: Removed session 18.
Jan 13 20:30:10.970612 systemd[1]: Started sshd@18-138.199.153.83:22-147.75.109.163:54674.service - OpenSSH per-connection server daemon (147.75.109.163:54674).
Jan 13 20:30:11.982961 sshd[4386]: Accepted publickey for core from 147.75.109.163 port 54674 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:11.985371 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:11.992293 systemd-logind[1461]: New session 19 of user core.
Jan 13 20:30:11.999655 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:30:12.743957 sshd[4388]: Connection closed by 147.75.109.163 port 54674
Jan 13 20:30:12.745061 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:12.751802 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:30:12.751845 systemd[1]: sshd@18-138.199.153.83:22-147.75.109.163:54674.service: Deactivated successfully.
Jan 13 20:30:12.755523 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:30:12.758206 systemd-logind[1461]: Removed session 19.
Jan 13 20:30:12.925671 systemd[1]: Started sshd@19-138.199.153.83:22-147.75.109.163:54682.service - OpenSSH per-connection server daemon (147.75.109.163:54682).
Jan 13 20:30:13.918230 sshd[4401]: Accepted publickey for core from 147.75.109.163 port 54682 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:13.920808 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:13.930353 systemd-logind[1461]: New session 20 of user core.
Jan 13 20:30:13.934549 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:30:15.951004 containerd[1481]: time="2025-01-13T20:30:15.950794578Z" level=info msg="StopContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" with timeout 30 (s)"
Jan 13 20:30:15.953312 containerd[1481]: time="2025-01-13T20:30:15.952848333Z" level=info msg="Stop container \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" with signal terminated"
Jan 13 20:30:15.970525 systemd[1]: cri-containerd-2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1.scope: Deactivated successfully.
Jan 13 20:30:15.988801 containerd[1481]: time="2025-01-13T20:30:15.988163941Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:30:15.997657 containerd[1481]: time="2025-01-13T20:30:15.997324882Z" level=info msg="StopContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" with timeout 2 (s)"
Jan 13 20:30:15.998327 containerd[1481]: time="2025-01-13T20:30:15.998184360Z" level=info msg="Stop container \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" with signal terminated"
Jan 13 20:30:16.010017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1-rootfs.mount: Deactivated successfully.
Jan 13 20:30:16.014979 systemd-networkd[1368]: lxc_health: Link DOWN
Jan 13 20:30:16.014989 systemd-networkd[1368]: lxc_health: Lost carrier
Jan 13 20:30:16.031196 containerd[1481]: time="2025-01-13T20:30:16.031109016Z" level=info msg="shim disconnected" id=2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1 namespace=k8s.io
Jan 13 20:30:16.031196 containerd[1481]: time="2025-01-13T20:30:16.031189936Z" level=warning msg="cleaning up after shim disconnected" id=2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1 namespace=k8s.io
Jan 13 20:30:16.031196 containerd[1481]: time="2025-01-13T20:30:16.031200176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:16.033906 systemd[1]: cri-containerd-416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59.scope: Deactivated successfully.
Jan 13 20:30:16.034570 systemd[1]: cri-containerd-416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59.scope: Consumed 8.714s CPU time.
Jan 13 20:30:16.053887 containerd[1481]: time="2025-01-13T20:30:16.053843492Z" level=info msg="StopContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" returns successfully"
Jan 13 20:30:16.054803 containerd[1481]: time="2025-01-13T20:30:16.054768810Z" level=info msg="StopPodSandbox for \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\""
Jan 13 20:30:16.054916 containerd[1481]: time="2025-01-13T20:30:16.054817570Z" level=info msg="Container to stop \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.057713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5-shm.mount: Deactivated successfully.
Jan 13 20:30:16.063185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59-rootfs.mount: Deactivated successfully.
Jan 13 20:30:16.071613 systemd[1]: cri-containerd-642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5.scope: Deactivated successfully.
Jan 13 20:30:16.074003 containerd[1481]: time="2025-01-13T20:30:16.073862293Z" level=info msg="shim disconnected" id=416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59 namespace=k8s.io
Jan 13 20:30:16.074449 containerd[1481]: time="2025-01-13T20:30:16.074162612Z" level=warning msg="cleaning up after shim disconnected" id=416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59 namespace=k8s.io
Jan 13 20:30:16.074449 containerd[1481]: time="2025-01-13T20:30:16.074183652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:16.096328 containerd[1481]: time="2025-01-13T20:30:16.096281889Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:30:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:30:16.101813 containerd[1481]: time="2025-01-13T20:30:16.101766198Z" level=info msg="StopContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" returns successfully"
Jan 13 20:30:16.102483 containerd[1481]: time="2025-01-13T20:30:16.102372117Z" level=info msg="StopPodSandbox for \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\""
Jan 13 20:30:16.102483 containerd[1481]: time="2025-01-13T20:30:16.102430037Z" level=info msg="Container to stop \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.102483 containerd[1481]: time="2025-01-13T20:30:16.102443837Z" level=info msg="Container to stop \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.102483 containerd[1481]: time="2025-01-13T20:30:16.102452837Z" level=info msg="Container to stop \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.102483 containerd[1481]: time="2025-01-13T20:30:16.102461677Z" level=info msg="Container to stop \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.102880 containerd[1481]: time="2025-01-13T20:30:16.102469877Z" level=info msg="Container to stop \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:30:16.107181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81-shm.mount: Deactivated successfully.
Jan 13 20:30:16.112075 systemd[1]: cri-containerd-992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81.scope: Deactivated successfully.
Jan 13 20:30:16.125612 containerd[1481]: time="2025-01-13T20:30:16.125547912Z" level=info msg="shim disconnected" id=642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5 namespace=k8s.io
Jan 13 20:30:16.125612 containerd[1481]: time="2025-01-13T20:30:16.125601992Z" level=warning msg="cleaning up after shim disconnected" id=642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5 namespace=k8s.io
Jan 13 20:30:16.125612 containerd[1481]: time="2025-01-13T20:30:16.125611712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:16.150895 containerd[1481]: time="2025-01-13T20:30:16.150852503Z" level=info msg="TearDown network for sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" successfully"
Jan 13 20:30:16.151437 containerd[1481]: time="2025-01-13T20:30:16.151222702Z" level=info msg="StopPodSandbox for \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" returns successfully"
Jan 13 20:30:16.155667 containerd[1481]: time="2025-01-13T20:30:16.155540494Z" level=info msg="shim disconnected" id=992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81 namespace=k8s.io
Jan 13 20:30:16.155667 containerd[1481]: time="2025-01-13T20:30:16.155598934Z" level=warning msg="cleaning up after shim disconnected" id=992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81 namespace=k8s.io
Jan 13 20:30:16.155667 containerd[1481]: time="2025-01-13T20:30:16.155607534Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:16.173487 containerd[1481]: time="2025-01-13T20:30:16.173435619Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:30:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:30:16.177271 containerd[1481]: time="2025-01-13T20:30:16.176856132Z" level=info msg="TearDown network for sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" successfully"
Jan 13 20:30:16.177271 containerd[1481]: time="2025-01-13T20:30:16.176902052Z" level=info msg="StopPodSandbox for \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" returns successfully"
Jan 13 20:30:16.208037 kubelet[2821]: I0113 20:30:16.207201    2821 scope.go:117] "RemoveContainer" containerID="2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1"
Jan 13 20:30:16.208037 kubelet[2821]: I0113 20:30:16.207357    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ee6dc-408a-4db4-9545-4fcd69154c0d-cilium-config-path\") pod \"379ee6dc-408a-4db4-9545-4fcd69154c0d\" (UID: \"379ee6dc-408a-4db4-9545-4fcd69154c0d\") "
Jan 13 20:30:16.208037 kubelet[2821]: I0113 20:30:16.207434    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5crkr\" (UniqueName: \"kubernetes.io/projected/379ee6dc-408a-4db4-9545-4fcd69154c0d-kube-api-access-5crkr\") pod \"379ee6dc-408a-4db4-9545-4fcd69154c0d\" (UID: \"379ee6dc-408a-4db4-9545-4fcd69154c0d\") "
Jan 13 20:30:16.213683 containerd[1481]: time="2025-01-13T20:30:16.212936422Z" level=info msg="RemoveContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\""
Jan 13 20:30:16.214501 kubelet[2821]: I0113 20:30:16.213603    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379ee6dc-408a-4db4-9545-4fcd69154c0d-kube-api-access-5crkr" (OuterVolumeSpecName: "kube-api-access-5crkr") pod "379ee6dc-408a-4db4-9545-4fcd69154c0d" (UID: "379ee6dc-408a-4db4-9545-4fcd69154c0d"). InnerVolumeSpecName "kube-api-access-5crkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:30:16.215091 kubelet[2821]: I0113 20:30:16.215024    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379ee6dc-408a-4db4-9545-4fcd69154c0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "379ee6dc-408a-4db4-9545-4fcd69154c0d" (UID: "379ee6dc-408a-4db4-9545-4fcd69154c0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:30:16.218897 containerd[1481]: time="2025-01-13T20:30:16.218760771Z" level=info msg="RemoveContainer for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" returns successfully"
Jan 13 20:30:16.219118 kubelet[2821]: I0113 20:30:16.219086    2821 scope.go:117] "RemoveContainer" containerID="2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1"
Jan 13 20:30:16.219452 containerd[1481]: time="2025-01-13T20:30:16.219411729Z" level=error msg="ContainerStatus for \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\": not found"
Jan 13 20:30:16.219650 kubelet[2821]: E0113 20:30:16.219633    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\": not found" containerID="2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1"
Jan 13 20:30:16.219816 kubelet[2821]: I0113 20:30:16.219783    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1"} err="failed to get container status \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e4917aac028447027b1a2d5223cf74d1884cabcbdd5127f214819ab0fafeef1\": not found"
Jan 13 20:30:16.219857 kubelet[2821]: I0113 20:30:16.219823    2821 scope.go:117] "RemoveContainer" containerID="416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59"
Jan 13 20:30:16.221766 containerd[1481]: time="2025-01-13T20:30:16.221729445Z" level=info msg="RemoveContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\""
Jan 13 20:30:16.226238 containerd[1481]: time="2025-01-13T20:30:16.226181516Z" level=info msg="RemoveContainer for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" returns successfully"
Jan 13 20:30:16.226581 kubelet[2821]: I0113 20:30:16.226489    2821 scope.go:117] "RemoveContainer" containerID="e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a"
Jan 13 20:30:16.228860 containerd[1481]: time="2025-01-13T20:30:16.228820831Z" level=info msg="RemoveContainer for \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\""
Jan 13 20:30:16.232622 containerd[1481]: time="2025-01-13T20:30:16.232578504Z" level=info msg="RemoveContainer for \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\" returns successfully"
Jan 13 20:30:16.233168 kubelet[2821]: I0113 20:30:16.233116    2821 scope.go:117] "RemoveContainer" containerID="a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5"
Jan 13 20:30:16.235396 containerd[1481]: time="2025-01-13T20:30:16.235058699Z" level=info msg="RemoveContainer for \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\""
Jan 13 20:30:16.238419 containerd[1481]: time="2025-01-13T20:30:16.238375372Z" level=info msg="RemoveContainer for \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\" returns successfully"
Jan 13 20:30:16.239745 kubelet[2821]: I0113 20:30:16.239646    2821 scope.go:117] "RemoveContainer" containerID="299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268"
Jan 13 20:30:16.243266 containerd[1481]: time="2025-01-13T20:30:16.243209963Z" level=info msg="RemoveContainer for \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\""
Jan 13 20:30:16.254266 containerd[1481]: time="2025-01-13T20:30:16.254153422Z" level=info msg="RemoveContainer for \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\" returns successfully"
Jan 13 20:30:16.255822 kubelet[2821]: I0113 20:30:16.255787    2821 scope.go:117] "RemoveContainer" containerID="3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157"
Jan 13 20:30:16.257511 containerd[1481]: time="2025-01-13T20:30:16.257477775Z" level=info msg="RemoveContainer for \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\""
Jan 13 20:30:16.263079 containerd[1481]: time="2025-01-13T20:30:16.262934085Z" level=info msg="RemoveContainer for \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\" returns successfully"
Jan 13 20:30:16.263456 kubelet[2821]: I0113 20:30:16.263426    2821 scope.go:117] "RemoveContainer" containerID="416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59"
Jan 13 20:30:16.263786 containerd[1481]: time="2025-01-13T20:30:16.263738963Z" level=error msg="ContainerStatus for \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\": not found"
Jan 13 20:30:16.264205 kubelet[2821]: E0113 20:30:16.264127    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\": not found" containerID="416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59"
Jan 13 20:30:16.264387 kubelet[2821]: I0113 20:30:16.264372    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59"} err="failed to get container status \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\": rpc error: code = NotFound desc = an error occurred when try to find container \"416dbf47f66389251f64ce2a6d7f5b8031a5d7506855db98cb93036946eb0f59\": not found"
Jan 13 20:30:16.264464 kubelet[2821]: I0113 20:30:16.264454    2821 scope.go:117] "RemoveContainer" containerID="e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a"
Jan 13 20:30:16.264881 containerd[1481]: time="2025-01-13T20:30:16.264852081Z" level=error msg="ContainerStatus for \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\": not found"
Jan 13 20:30:16.265188 kubelet[2821]: E0113 20:30:16.265164    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\": not found" containerID="e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a"
Jan 13 20:30:16.265287 kubelet[2821]: I0113 20:30:16.265211    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a"} err="failed to get container status \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e31372115550902d542d7b6ef296ae643b8cbc5093930e4d474d8780e089b55a\": not found"
Jan 13 20:30:16.265287 kubelet[2821]: I0113 20:30:16.265227    2821 scope.go:117] "RemoveContainer" containerID="a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5"
Jan 13 20:30:16.267479 containerd[1481]: time="2025-01-13T20:30:16.267443876Z" level=error msg="ContainerStatus for \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\": not found"
Jan 13 20:30:16.267716 kubelet[2821]: E0113 20:30:16.267692    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\": not found" containerID="a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5"
Jan 13 20:30:16.267777 kubelet[2821]: I0113 20:30:16.267738    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5"} err="failed to get container status \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a194afcee934d3ba4f77a1ef7504debab74a245108f892b2abaeb4293fdb08c5\": not found"
Jan 13 20:30:16.267777 kubelet[2821]: I0113 20:30:16.267753    2821 scope.go:117] "RemoveContainer" containerID="299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268"
Jan 13 20:30:16.268090 containerd[1481]: time="2025-01-13T20:30:16.268043835Z" level=error msg="ContainerStatus for \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\": not found"
Jan 13 20:30:16.268564 kubelet[2821]: E0113 20:30:16.268537    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\": not found" containerID="299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268"
Jan 13 20:30:16.268636 kubelet[2821]: I0113 20:30:16.268583    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268"} err="failed to get container status \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\": rpc error: code = NotFound desc = an error occurred when try to find container \"299f960cd3ca09451170fbcbf8960820683489d0321639365e0bba4fc8dac268\": not found"
Jan 13 20:30:16.268636 kubelet[2821]: I0113 20:30:16.268596    2821 scope.go:117] "RemoveContainer" containerID="3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157"
Jan 13 20:30:16.270723 containerd[1481]: time="2025-01-13T20:30:16.270683070Z" level=error msg="ContainerStatus for \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\": not found"
Jan 13 20:30:16.271271 kubelet[2821]: E0113 20:30:16.271166    2821 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\": not found" containerID="3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157"
Jan 13 20:30:16.271271 kubelet[2821]: I0113 20:30:16.271207    2821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157"} err="failed to get container status \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\": rpc error: code = NotFound desc = an error occurred when try to find container \"3200471247fc4b376c6b48ce47da5c8c3bd7b0510eaac772e3345c854302b157\": not found"
Jan 13 20:30:16.307723 kubelet[2821]: I0113 20:30:16.307675    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ce12128-e669-4946-b129-f0f9a7dff7d9-clustermesh-secrets\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.307939    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-cgroup\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.308048    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-bpf-maps\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.308097    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-run\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.308135    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-net\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.308167    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-etc-cni-netd\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309150 kubelet[2821]: I0113 20:30:16.308206    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfkmj\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-kube-api-access-wfkmj\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308263    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-xtables-lock\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308307    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-kernel\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308342    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-hubble-tls\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308373    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-hostproc\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308402    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cni-path\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309575 kubelet[2821]: I0113 20:30:16.308437    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-config-path\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308468    2821 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-lib-modules\") pod \"3ce12128-e669-4946-b129-f0f9a7dff7d9\" (UID: \"3ce12128-e669-4946-b129-f0f9a7dff7d9\") "
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308525    2821 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ee6dc-408a-4db4-9545-4fcd69154c0d-cilium-config-path\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308546    2821 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5crkr\" (UniqueName: \"kubernetes.io/projected/379ee6dc-408a-4db4-9545-4fcd69154c0d-kube-api-access-5crkr\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308586    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308640    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.309806 kubelet[2821]: I0113 20:30:16.308667    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.310063 kubelet[2821]: I0113 20:30:16.308691    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.310063 kubelet[2821]: I0113 20:30:16.308716    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.310063 kubelet[2821]: I0113 20:30:16.308743    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.311197 kubelet[2821]: I0113 20:30:16.311160    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.312325 kubelet[2821]: I0113 20:30:16.311226    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.312534 kubelet[2821]: I0113 20:30:16.311377    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.312695 kubelet[2821]: I0113 20:30:16.312634    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:30:16.315506 kubelet[2821]: I0113 20:30:16.315449    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-kube-api-access-wfkmj" (OuterVolumeSpecName: "kube-api-access-wfkmj") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "kube-api-access-wfkmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:30:16.315706 kubelet[2821]: I0113 20:30:16.315678    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce12128-e669-4946-b129-f0f9a7dff7d9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:30:16.316292 kubelet[2821]: I0113 20:30:16.316215    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:30:16.316573 kubelet[2821]: I0113 20:30:16.316533    2821 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ce12128-e669-4946-b129-f0f9a7dff7d9" (UID: "3ce12128-e669-4946-b129-f0f9a7dff7d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.408909    2821 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-net\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.408979    2821 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-etc-cni-netd\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409012    2821 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-host-proc-sys-kernel\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409032    2821 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-hubble-tls\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409055    2821 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wfkmj\" (UniqueName: \"kubernetes.io/projected/3ce12128-e669-4946-b129-f0f9a7dff7d9-kube-api-access-wfkmj\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409074    2821 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-xtables-lock\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409096    2821 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-config-path\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409322 kubelet[2821]: I0113 20:30:16.409115    2821 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-lib-modules\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409136    2821 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-hostproc\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409166    2821 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cni-path\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409187    2821 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ce12128-e669-4946-b129-f0f9a7dff7d9-clustermesh-secrets\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409208    2821 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-cgroup\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409226    2821 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-bpf-maps\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.409843 kubelet[2821]: I0113 20:30:16.409276    2821 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ce12128-e669-4946-b129-f0f9a7dff7d9-cilium-run\") on node \"ci-4152-2-0-6-49e4a12287\" DevicePath \"\""
Jan 13 20:30:16.515539 systemd[1]: Removed slice kubepods-besteffort-pod379ee6dc_408a_4db4_9545_4fcd69154c0d.slice - libcontainer container kubepods-besteffort-pod379ee6dc_408a_4db4_9545_4fcd69154c0d.slice.
Jan 13 20:30:16.523404 systemd[1]: Removed slice kubepods-burstable-pod3ce12128_e669_4946_b129_f0f9a7dff7d9.slice - libcontainer container kubepods-burstable-pod3ce12128_e669_4946_b129_f0f9a7dff7d9.slice.
Jan 13 20:30:16.523636 systemd[1]: kubepods-burstable-pod3ce12128_e669_4946_b129_f0f9a7dff7d9.slice: Consumed 8.813s CPU time.
Jan 13 20:30:16.965971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5-rootfs.mount: Deactivated successfully.
Jan 13 20:30:16.966094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81-rootfs.mount: Deactivated successfully.
Jan 13 20:30:16.966162 systemd[1]: var-lib-kubelet-pods-379ee6dc\x2d408a\x2d4db4\x2d9545\x2d4fcd69154c0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5crkr.mount: Deactivated successfully.
Jan 13 20:30:16.966222 systemd[1]: var-lib-kubelet-pods-3ce12128\x2de669\x2d4946\x2db129\x2df0f9a7dff7d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfkmj.mount: Deactivated successfully.
Jan 13 20:30:16.966308 systemd[1]: var-lib-kubelet-pods-3ce12128\x2de669\x2d4946\x2db129\x2df0f9a7dff7d9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:30:16.966365 systemd[1]: var-lib-kubelet-pods-3ce12128\x2de669\x2d4946\x2db129\x2df0f9a7dff7d9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:30:17.206208 kubelet[2821]: I0113 20:30:17.205806    2821 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="379ee6dc-408a-4db4-9545-4fcd69154c0d" path="/var/lib/kubelet/pods/379ee6dc-408a-4db4-9545-4fcd69154c0d/volumes"
Jan 13 20:30:17.206839 kubelet[2821]: I0113 20:30:17.206789    2821 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" path="/var/lib/kubelet/pods/3ce12128-e669-4946-b129-f0f9a7dff7d9/volumes"
Jan 13 20:30:18.031996 sshd[4403]: Connection closed by 147.75.109.163 port 54682
Jan 13 20:30:18.032788 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:18.037125 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:30:18.038167 systemd[1]: sshd@19-138.199.153.83:22-147.75.109.163:54682.service: Deactivated successfully.
Jan 13 20:30:18.042549 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:30:18.045103 systemd-logind[1461]: Removed session 20.
Jan 13 20:30:18.207769 systemd[1]: Started sshd@20-138.199.153.83:22-147.75.109.163:52634.service - OpenSSH per-connection server daemon (147.75.109.163:52634).
Jan 13 20:30:18.460305 kubelet[2821]: E0113 20:30:18.459830    2821 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:30:19.201655 kubelet[2821]: E0113 20:30:19.201297    2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2584n" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f"
Jan 13 20:30:19.206323 sshd[4565]: Accepted publickey for core from 147.75.109.163 port 52634 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:19.210347 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:19.216704 systemd-logind[1461]: New session 21 of user core.
Jan 13 20:30:19.221510 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:30:21.076510 kubelet[2821]: I0113 20:30:21.075389    2821 topology_manager.go:215] "Topology Admit Handler" podUID="cfe898a4-47b2-42f8-90bc-53ef435c0867" podNamespace="kube-system" podName="cilium-j7hc9"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075449    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="379ee6dc-408a-4db4-9545-4fcd69154c0d" containerName="cilium-operator"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075460    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="mount-bpf-fs"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075467    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="clean-cilium-state"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075474    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="mount-cgroup"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075481    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="apply-sysctl-overwrites"
Jan 13 20:30:21.076510 kubelet[2821]: E0113 20:30:21.075488    2821 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="cilium-agent"
Jan 13 20:30:21.076510 kubelet[2821]: I0113 20:30:21.075510    2821 memory_manager.go:354] "RemoveStaleState removing state" podUID="379ee6dc-408a-4db4-9545-4fcd69154c0d" containerName="cilium-operator"
Jan 13 20:30:21.076510 kubelet[2821]: I0113 20:30:21.075516    2821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce12128-e669-4946-b129-f0f9a7dff7d9" containerName="cilium-agent"
Jan 13 20:30:21.084198 kubelet[2821]: W0113 20:30:21.084169    2821 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085052 kubelet[2821]: E0113 20:30:21.084800    2821 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085052 kubelet[2821]: W0113 20:30:21.084874    2821 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085052 kubelet[2821]: E0113 20:30:21.084953    2821 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085052 kubelet[2821]: W0113 20:30:21.085022    2821 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085052 kubelet[2821]: E0113 20:30:21.085033    2821 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085594 kubelet[2821]: W0113 20:30:21.085427    2821 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.085594 kubelet[2821]: E0113 20:30:21.085571    2821 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-6-49e4a12287" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-49e4a12287' and this object
Jan 13 20:30:21.086596 systemd[1]: Created slice kubepods-burstable-podcfe898a4_47b2_42f8_90bc_53ef435c0867.slice - libcontainer container kubepods-burstable-podcfe898a4_47b2_42f8_90bc_53ef435c0867.slice.
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140176    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-cni-path\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140276    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-etc-cni-netd\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140326    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfe898a4-47b2-42f8-90bc-53ef435c0867-clustermesh-secrets\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140359    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-run\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140388    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-xtables-lock\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.140939 kubelet[2821]: I0113 20:30:21.140414    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfe898a4-47b2-42f8-90bc-53ef435c0867-hubble-tls\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140442    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-host-proc-sys-net\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140472    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-host-proc-sys-kernel\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140498    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-config-path\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140535    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-hostproc\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140579    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-cgroup\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.141386 kubelet[2821]: I0113 20:30:21.140607    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-lib-modules\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.142071 kubelet[2821]: I0113 20:30:21.140639    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-ipsec-secrets\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.142071 kubelet[2821]: I0113 20:30:21.140667    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfe898a4-47b2-42f8-90bc-53ef435c0867-bpf-maps\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.142071 kubelet[2821]: I0113 20:30:21.140695    2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppxn\" (UniqueName: \"kubernetes.io/projected/cfe898a4-47b2-42f8-90bc-53ef435c0867-kube-api-access-rppxn\") pod \"cilium-j7hc9\" (UID: \"cfe898a4-47b2-42f8-90bc-53ef435c0867\") " pod="kube-system/cilium-j7hc9"
Jan 13 20:30:21.203269 kubelet[2821]: E0113 20:30:21.201576    2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2584n" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f"
Jan 13 20:30:21.209635 sshd[4569]: Connection closed by 147.75.109.163 port 52634
Jan 13 20:30:21.212599 sshd-session[4565]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:21.218310 systemd[1]: sshd@20-138.199.153.83:22-147.75.109.163:52634.service: Deactivated successfully.
Jan 13 20:30:21.220770 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:30:21.222556 systemd[1]: session-21.scope: Consumed 1.229s CPU time.
Jan 13 20:30:21.223552 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:30:21.224569 systemd-logind[1461]: Removed session 21.
Jan 13 20:30:21.336400 kubelet[2821]: I0113 20:30:21.335344    2821 setters.go:568] "Node became not ready" node="ci-4152-2-0-6-49e4a12287" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:30:21Z","lastTransitionTime":"2025-01-13T20:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:30:21.388689 systemd[1]: Started sshd@21-138.199.153.83:22-147.75.109.163:52650.service - OpenSSH per-connection server daemon (147.75.109.163:52650).
Jan 13 20:30:22.243363 kubelet[2821]: E0113 20:30:22.242696    2821 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:30:22.243363 kubelet[2821]: E0113 20:30:22.242869    2821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-config-path podName:cfe898a4-47b2-42f8-90bc-53ef435c0867 nodeName:}" failed. No retries permitted until 2025-01-13 20:30:22.742835184 +0000 UTC m=+349.706231851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/cfe898a4-47b2-42f8-90bc-53ef435c0867-cilium-config-path") pod "cilium-j7hc9" (UID: "cfe898a4-47b2-42f8-90bc-53ef435c0867") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:30:22.393220 sshd[4581]: Accepted publickey for core from 147.75.109.163 port 52650 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:22.395081 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:22.403348 systemd-logind[1461]: New session 22 of user core.
Jan 13 20:30:22.409509 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:30:22.895299 containerd[1481]: time="2025-01-13T20:30:22.894849636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7hc9,Uid:cfe898a4-47b2-42f8-90bc-53ef435c0867,Namespace:kube-system,Attempt:0,}"
Jan 13 20:30:22.922503 containerd[1481]: time="2025-01-13T20:30:22.921929599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:22.922503 containerd[1481]: time="2025-01-13T20:30:22.921999279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:22.922503 containerd[1481]: time="2025-01-13T20:30:22.922011559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:22.922503 containerd[1481]: time="2025-01-13T20:30:22.922100479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:22.945103 systemd[1]: run-containerd-runc-k8s.io-bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50-runc.BWnOxw.mount: Deactivated successfully.
Jan 13 20:30:22.960596 systemd[1]: Started cri-containerd-bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50.scope - libcontainer container bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50.
Jan 13 20:30:22.991629 containerd[1481]: time="2025-01-13T20:30:22.990976708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7hc9,Uid:cfe898a4-47b2-42f8-90bc-53ef435c0867,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\""
Jan 13 20:30:22.995180 containerd[1481]: time="2025-01-13T20:30:22.995102862Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:30:23.007507 containerd[1481]: time="2025-01-13T20:30:23.007430366Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0\""
Jan 13 20:30:23.008463 containerd[1481]: time="2025-01-13T20:30:23.008224765Z" level=info msg="StartContainer for \"8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0\""
Jan 13 20:30:23.045464 systemd[1]: Started cri-containerd-8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0.scope - libcontainer container 8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0.
Jan 13 20:30:23.079527 containerd[1481]: time="2025-01-13T20:30:23.079379678Z" level=info msg="StartContainer for \"8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0\" returns successfully"
Jan 13 20:30:23.082081 sshd[4586]: Connection closed by 147.75.109.163 port 52650
Jan 13 20:30:23.082709 sshd-session[4581]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:23.089966 systemd[1]: sshd@21-138.199.153.83:22-147.75.109.163:52650.service: Deactivated successfully.
Jan 13 20:30:23.094009 systemd[1]: cri-containerd-8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0.scope: Deactivated successfully.
Jan 13 20:30:23.096695 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:30:23.099875 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:30:23.102599 systemd-logind[1461]: Removed session 22.
Jan 13 20:30:23.131125 containerd[1481]: time="2025-01-13T20:30:23.131064134Z" level=info msg="shim disconnected" id=8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0 namespace=k8s.io
Jan 13 20:30:23.131684 containerd[1481]: time="2025-01-13T20:30:23.131482453Z" level=warning msg="cleaning up after shim disconnected" id=8264a1a97c4dc5118b52bd67d032c7d82d38d9de8fa9a864c5921b9537293dd0 namespace=k8s.io
Jan 13 20:30:23.131684 containerd[1481]: time="2025-01-13T20:30:23.131501133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:23.148520 containerd[1481]: time="2025-01-13T20:30:23.148366713Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:30:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:30:23.204529 kubelet[2821]: E0113 20:30:23.203506    2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2584n" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f"
Jan 13 20:30:23.245608 containerd[1481]: time="2025-01-13T20:30:23.245553033Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:30:23.261966 systemd[1]: Started sshd@22-138.199.153.83:22-147.75.109.163:52658.service - OpenSSH per-connection server daemon (147.75.109.163:52658).
Jan 13 20:30:23.284246 containerd[1481]: time="2025-01-13T20:30:23.283685986Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d\""
Jan 13 20:30:23.286480 containerd[1481]: time="2025-01-13T20:30:23.285672983Z" level=info msg="StartContainer for \"a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d\""
Jan 13 20:30:23.313492 systemd[1]: Started cri-containerd-a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d.scope - libcontainer container a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d.
Jan 13 20:30:23.342708 containerd[1481]: time="2025-01-13T20:30:23.342331874Z" level=info msg="StartContainer for \"a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d\" returns successfully"
Jan 13 20:30:23.348085 systemd[1]: cri-containerd-a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d.scope: Deactivated successfully.
Jan 13 20:30:23.374173 containerd[1481]: time="2025-01-13T20:30:23.373883555Z" level=info msg="shim disconnected" id=a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d namespace=k8s.io
Jan 13 20:30:23.374173 containerd[1481]: time="2025-01-13T20:30:23.373978435Z" level=warning msg="cleaning up after shim disconnected" id=a29ab15a3103384bf27765b611a63e95094bca3ecb527d17dd7fc8191de4253d namespace=k8s.io
Jan 13 20:30:23.374173 containerd[1481]: time="2025-01-13T20:30:23.373986515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:23.462376 kubelet[2821]: E0113 20:30:23.462188    2821 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:30:24.246874 containerd[1481]: time="2025-01-13T20:30:24.246662584Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:30:24.272309 sshd[4695]: Accepted publickey for core from 147.75.109.163 port 52658 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:30:24.274013 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:30:24.286512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094454881.mount: Deactivated successfully.
Jan 13 20:30:24.291396 containerd[1481]: time="2025-01-13T20:30:24.290764374Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42\""
Jan 13 20:30:24.295431 containerd[1481]: time="2025-01-13T20:30:24.292146572Z" level=info msg="StartContainer for \"024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42\""
Jan 13 20:30:24.300197 systemd-logind[1461]: New session 23 of user core.
Jan 13 20:30:24.305634 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:30:24.350478 systemd[1]: Started cri-containerd-024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42.scope - libcontainer container 024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42.
Jan 13 20:30:24.404780 containerd[1481]: time="2025-01-13T20:30:24.404723045Z" level=info msg="StartContainer for \"024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42\" returns successfully"
Jan 13 20:30:24.407643 systemd[1]: cri-containerd-024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42.scope: Deactivated successfully.
Jan 13 20:30:24.436419 containerd[1481]: time="2025-01-13T20:30:24.436336569Z" level=info msg="shim disconnected" id=024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42 namespace=k8s.io
Jan 13 20:30:24.436419 containerd[1481]: time="2025-01-13T20:30:24.436394329Z" level=warning msg="cleaning up after shim disconnected" id=024ef02c6be15b2f564aa6ea29f044ff3e64c3cc8425811740c47015c0700e42 namespace=k8s.io
Jan 13 20:30:24.436419 containerd[1481]: time="2025-01-13T20:30:24.436436249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:25.201869 kubelet[2821]: E0113 20:30:25.200853    2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2584n" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f"
Jan 13 20:30:25.254718 containerd[1481]: time="2025-01-13T20:30:25.254481147Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:30:25.273789 containerd[1481]: time="2025-01-13T20:30:25.273422088Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4\""
Jan 13 20:30:25.275441 containerd[1481]: time="2025-01-13T20:30:25.275310966Z" level=info msg="StartContainer for \"52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4\""
Jan 13 20:30:25.315532 systemd[1]: Started cri-containerd-52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4.scope - libcontainer container 52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4.
Jan 13 20:30:25.346157 systemd[1]: cri-containerd-52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4.scope: Deactivated successfully.
Jan 13 20:30:25.349390 containerd[1481]: time="2025-01-13T20:30:25.349347129Z" level=info msg="StartContainer for \"52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4\" returns successfully"
Jan 13 20:30:25.373564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4-rootfs.mount: Deactivated successfully.
Jan 13 20:30:25.376830 containerd[1481]: time="2025-01-13T20:30:25.376610341Z" level=info msg="shim disconnected" id=52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4 namespace=k8s.io
Jan 13 20:30:25.376830 containerd[1481]: time="2025-01-13T20:30:25.376768621Z" level=warning msg="cleaning up after shim disconnected" id=52417e3392a2837f36c6c19ee9a617623bc93a195827ee12fba9c57b91301ae4 namespace=k8s.io
Jan 13 20:30:25.376830 containerd[1481]: time="2025-01-13T20:30:25.376779981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:26.262958 containerd[1481]: time="2025-01-13T20:30:26.261587250Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:30:26.287034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436537281.mount: Deactivated successfully.
Jan 13 20:30:26.292462 containerd[1481]: time="2025-01-13T20:30:26.292303502Z" level=info msg="CreateContainer within sandbox \"bcf0baa663f24d6f93f3c9c319500398619cc576c96da7ea298528d0bda4aa50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62\""
Jan 13 20:30:26.294755 containerd[1481]: time="2025-01-13T20:30:26.293593940Z" level=info msg="StartContainer for \"44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62\""
Jan 13 20:30:26.330519 systemd[1]: Started cri-containerd-44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62.scope - libcontainer container 44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62.
Jan 13 20:30:26.373768 containerd[1481]: time="2025-01-13T20:30:26.373464586Z" level=info msg="StartContainer for \"44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62\" returns successfully"
Jan 13 20:30:26.717273 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:30:27.203901 kubelet[2821]: E0113 20:30:27.201365    2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2584n" podUID="6cc99e97-2145-4576-972b-6db3fdabd52f"
Jan 13 20:30:27.285527 kubelet[2821]: I0113 20:30:27.285475    2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j7hc9" podStartSLOduration=6.285409158 podStartE2EDuration="6.285409158s" podCreationTimestamp="2025-01-13 20:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:27.284843079 +0000 UTC m=+354.248239786" watchObservedRunningTime="2025-01-13 20:30:27.285409158 +0000 UTC m=+354.248805825"
Jan 13 20:30:29.012648 systemd[1]: run-containerd-runc-k8s.io-44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62-runc.g2c2Xs.mount: Deactivated successfully.
Jan 13 20:30:29.982474 systemd-networkd[1368]: lxc_health: Link UP
Jan 13 20:30:29.994426 systemd-networkd[1368]: lxc_health: Gained carrier
Jan 13 20:30:31.942411 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jan 13 20:30:33.210252 containerd[1481]: time="2025-01-13T20:30:33.210144960Z" level=info msg="StopPodSandbox for \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\""
Jan 13 20:30:33.211410 containerd[1481]: time="2025-01-13T20:30:33.210736120Z" level=info msg="TearDown network for sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" successfully"
Jan 13 20:30:33.211410 containerd[1481]: time="2025-01-13T20:30:33.210914160Z" level=info msg="StopPodSandbox for \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" returns successfully"
Jan 13 20:30:33.213338 containerd[1481]: time="2025-01-13T20:30:33.211757919Z" level=info msg="RemovePodSandbox for \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\""
Jan 13 20:30:33.213338 containerd[1481]: time="2025-01-13T20:30:33.211789679Z" level=info msg="Forcibly stopping sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\""
Jan 13 20:30:33.213338 containerd[1481]: time="2025-01-13T20:30:33.211859319Z" level=info msg="TearDown network for sandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" successfully"
Jan 13 20:30:33.215949 containerd[1481]: time="2025-01-13T20:30:33.215873318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:30:33.216168 containerd[1481]: time="2025-01-13T20:30:33.216151038Z" level=info msg="RemovePodSandbox \"992e99c94b79f5b37c963b06846740b6cd96d44442d4c7bbb028a1507308fd81\" returns successfully"
Jan 13 20:30:33.217118 containerd[1481]: time="2025-01-13T20:30:33.217086838Z" level=info msg="StopPodSandbox for \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\""
Jan 13 20:30:33.217371 containerd[1481]: time="2025-01-13T20:30:33.217350318Z" level=info msg="TearDown network for sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" successfully"
Jan 13 20:30:33.217458 containerd[1481]: time="2025-01-13T20:30:33.217445918Z" level=info msg="StopPodSandbox for \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" returns successfully"
Jan 13 20:30:33.218367 containerd[1481]: time="2025-01-13T20:30:33.217870358Z" level=info msg="RemovePodSandbox for \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\""
Jan 13 20:30:33.218367 containerd[1481]: time="2025-01-13T20:30:33.217900838Z" level=info msg="Forcibly stopping sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\""
Jan 13 20:30:33.218367 containerd[1481]: time="2025-01-13T20:30:33.217957958Z" level=info msg="TearDown network for sandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" successfully"
Jan 13 20:30:33.224461 containerd[1481]: time="2025-01-13T20:30:33.224401116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:30:33.224888 containerd[1481]: time="2025-01-13T20:30:33.224729916Z" level=info msg="RemovePodSandbox \"642856cedb2e1ec2a0b23d779a703f9f81eb3350cc0947c5177f13b568467fb5\" returns successfully"
Jan 13 20:30:33.358703 systemd[1]: run-containerd-runc-k8s.io-44da9ba7b3975df35baeb58ac61e680a8a30de54a6f0f28a27c1b52e2e859a62-runc.0TDJHo.mount: Deactivated successfully.
Jan 13 20:30:35.793325 sshd[4766]: Connection closed by 147.75.109.163 port 52658
Jan 13 20:30:35.794518 sshd-session[4695]: pam_unix(sshd:session): session closed for user core
Jan 13 20:30:35.799052 systemd[1]: sshd@22-138.199.153.83:22-147.75.109.163:52658.service: Deactivated successfully.
Jan 13 20:30:35.802267 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:30:35.804896 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:30:35.806876 systemd-logind[1461]: Removed session 23.
Jan 13 20:30:51.018943 kubelet[2821]: E0113 20:30:51.018896    2821 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37088->10.0.0.2:2379: read: connection timed out"
Jan 13 20:30:51.022297 systemd[1]: cri-containerd-d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b.scope: Deactivated successfully.
Jan 13 20:30:51.023059 systemd[1]: cri-containerd-d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b.scope: Consumed 2.844s CPU time, 16.6M memory peak, 0B memory swap peak.
Jan 13 20:30:51.050429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b-rootfs.mount: Deactivated successfully.
Jan 13 20:30:51.058439 containerd[1481]: time="2025-01-13T20:30:51.058141910Z" level=info msg="shim disconnected" id=d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b namespace=k8s.io
Jan 13 20:30:51.058439 containerd[1481]: time="2025-01-13T20:30:51.058204791Z" level=warning msg="cleaning up after shim disconnected" id=d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b namespace=k8s.io
Jan 13 20:30:51.058439 containerd[1481]: time="2025-01-13T20:30:51.058214631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:51.332143 kubelet[2821]: I0113 20:30:51.331272    2821 scope.go:117] "RemoveContainer" containerID="d54340dbf3012b12f68a9e96c9b36087a317b2697151a77f6ed1cd68b702982b"
Jan 13 20:30:51.335764 containerd[1481]: time="2025-01-13T20:30:51.335679582Z" level=info msg="CreateContainer within sandbox \"7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:30:51.349735 containerd[1481]: time="2025-01-13T20:30:51.349671159Z" level=info msg="CreateContainer within sandbox \"7060d96a6fb2740b44b3d022de4b7b3a05100c1fcbdb2acc5e47316e6d7ee49b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912\""
Jan 13 20:30:51.350300 containerd[1481]: time="2025-01-13T20:30:51.350203000Z" level=info msg="StartContainer for \"03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912\""
Jan 13 20:30:51.376939 systemd[1]: run-containerd-runc-k8s.io-03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912-runc.qdqVjo.mount: Deactivated successfully.
Jan 13 20:30:51.384449 systemd[1]: Started cri-containerd-03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912.scope - libcontainer container 03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912.
Jan 13 20:30:51.421046 containerd[1481]: time="2025-01-13T20:30:51.420977010Z" level=info msg="StartContainer for \"03debc4ae15a857dfbad223822d451ab3741834374656e1a52e328004bb11912\" returns successfully"
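The sequence above is the kubelet replacing the dead kube-scheduler container: RemoveContainer for the old ID, CreateContainer within the still-running pod sandbox (hence Attempt:1), then StartContainer for the new ID. A minimal sketch that reconstructs such restart chains from the two message shapes above; the pairing heuristic (nearest following create) is an assumption for illustration, not kubelet logic.

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative): pair each kubelet "RemoveContainer"
# with the next containerd "returns container id" line to reconstruct
# restart chains like the kube-scheduler one above. The pairing rule
# (nearest following create) is an assumption, not kubelet logic.
import re
import sys

REMOVE_RE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
CREATE_RE = re.compile(r'returns container id \\"([0-9a-f]{64})\\"')

pending = None  # container ID awaiting its replacement
for line in sys.stdin:
    if (m := REMOVE_RE.search(line)):
        pending = m.group(1)
    elif (m := CREATE_RE.search(line)) and pending:
        print(f"restart: {pending[:12]} -> {m.group(1)[:12]}")
        pending = None
```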
Jan 13 20:30:52.307720 systemd[1]: cri-containerd-43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c.scope: Deactivated successfully.
Jan 13 20:30:52.308746 systemd[1]: cri-containerd-43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c.scope: Consumed 6.677s CPU time, 21.8M memory peak, 0B memory swap peak.
Jan 13 20:30:52.343195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c-rootfs.mount: Deactivated successfully.
Jan 13 20:30:52.351730 containerd[1481]: time="2025-01-13T20:30:52.351617176Z" level=info msg="shim disconnected" id=43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c namespace=k8s.io
Jan 13 20:30:52.352152 containerd[1481]: time="2025-01-13T20:30:52.351703136Z" level=warning msg="cleaning up after shim disconnected" id=43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c namespace=k8s.io
Jan 13 20:30:52.352152 containerd[1481]: time="2025-01-13T20:30:52.351787776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:30:52.376295 containerd[1481]: time="2025-01-13T20:30:52.373927846Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:30:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
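When systemd deactivates a container scope it logs the scope's accumulated resource usage, as in the "Consumed ... CPU time, ... memory peak" lines above; the cleanup warning (runc exit status 255) records that `runc delete` failed while containerd was cleaning up the dead shim. A minimal sketch for turning those accounting lines into numbers, assuming only the unit suffixes that actually appear in this log:

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative): turn systemd's per-scope accounting
# lines, e.g. "Consumed 6.677s CPU time, 21.8M memory peak", into
# (unit, cpu_seconds, mem_peak_bytes). The suffix table covers only
# what appears in this log; other suffixes would need adding.
import re
import sys

ACCT_RE = re.compile(
    r"systemd\[1\]: (?P<unit>\S+): Consumed (?P<cpu>[\d.]+)s CPU time, "
    r"(?P<mem>[\d.]+)(?P<suf>[KMG]) memory peak"
)
SUFFIX = {"K": 1024, "M": 1024**2, "G": 1024**3}

for line in sys.stdin:
    if (m := ACCT_RE.search(line)):
        mem = float(m.group("mem")) * SUFFIX[m.group("suf")]
        print(m.group("unit"), float(m.group("cpu")), int(mem))
```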
Jan 13 20:30:53.347164 kubelet[2821]: I0113 20:30:53.346401    2821 scope.go:117] "RemoveContainer" containerID="43fa7befb7f38c3b42b32f42808f20dc17c6301fde80a458113c2217d5e0746c"
Jan 13 20:30:53.349119 containerd[1481]: time="2025-01-13T20:30:53.349062025Z" level=info msg="CreateContainer within sandbox \"1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:30:53.373398 containerd[1481]: time="2025-01-13T20:30:53.371662177Z" level=info msg="CreateContainer within sandbox \"1aed82f21f04d9daa993ca005f1f9219044f0d9ef4d0e2016c1e108a9fee5325\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"df4390b17712008c4c37e096d62732691a1da59424dd6c29739a5a68a86ab88d\""
Jan 13 20:30:53.373398 containerd[1481]: time="2025-01-13T20:30:53.372438898Z" level=info msg="StartContainer for \"df4390b17712008c4c37e096d62732691a1da59424dd6c29739a5a68a86ab88d\""
Jan 13 20:30:53.407477 systemd[1]: Started cri-containerd-df4390b17712008c4c37e096d62732691a1da59424dd6c29739a5a68a86ab88d.scope - libcontainer container df4390b17712008c4c37e096d62732691a1da59424dd6c29739a5a68a86ab88d.
Jan 13 20:30:53.457639 containerd[1481]: time="2025-01-13T20:30:53.457580740Z" level=info msg="StartContainer for \"df4390b17712008c4c37e096d62732691a1da59424dd6c29739a5a68a86ab88d\" returns successfully"
Jan 13 20:30:56.263276 kubelet[2821]: E0113 20:30:56.263198    2821 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36886->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-0-6-49e4a12287.181a5a9f22be8ba1  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-0-6-49e4a12287,UID:5aa8018309e7466c38f1ce9a58bfdfe4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-49e4a12287,},FirstTimestamp:2025-01-13 20:30:45.833960353 +0000 UTC m=+372.797356980,LastTimestamp:2025-01-13 20:30:45.833960353 +0000 UTC m=+372.797356980,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-49e4a12287,}"
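The final line is the kubelet failing to post a Warning event for the same etcd read timeout; the payload embedded in the error preserves the original problem, a kube-apiserver readiness probe that returned HTTP 500 at 20:30:45, roughly ten seconds before this post was rejected. A minimal sketch for pulling the readable fields out of such a blob; since the Event is logged in its Go struct form, plain regexes stand in for a real parser, and the field list covers only what this line is known to contain.

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative): pull the readable fields out of a
# rejected-event line like the one above. The Event is logged in its
# Go struct form, so plain regexes stand in for a real parser and the
# field list is only what this log line is known to contain.
import re
import sys

FIELDS = ("Reason", "Message", "FirstTimestamp", "Count")

for line in sys.stdin:
    if "Server rejected event" not in line:
        continue
    for field in FIELDS:
        if (m := re.search(rf"{field}:([^,]*),", line)):
            print(f"{field}: {m.group(1)}")
```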