Jan 29 11:01:53.900445 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:01:53.900471 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 11:01:53.900482 kernel: KASLR enabled
Jan 29 11:01:53.900487 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 11:01:53.900493 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 29 11:01:53.900498 kernel: random: crng init done
Jan 29 11:01:53.900505 kernel: secureboot: Secure boot disabled
Jan 29 11:01:53.900511 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:01:53.900517 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 11:01:53.900525 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:01:53.900531 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900537 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900543 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900549 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900556 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900563 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900569 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900576 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900582 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:01:53.900588 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:01:53.900594 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 11:01:53.900600 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:01:53.900606 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:01:53.900612 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 29 11:01:53.900618 kernel: Zone ranges:
Jan 29 11:01:53.900625 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 11:01:53.900631 kernel: DMA32 empty
Jan 29 11:01:53.900637 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 11:01:53.900643 kernel: Movable zone start for each node
Jan 29 11:01:53.900950 kernel: Early memory node ranges
Jan 29 11:01:53.900957 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 29 11:01:53.900964 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 29 11:01:53.900970 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 29 11:01:53.900976 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 11:01:53.900982 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 11:01:53.900988 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 11:01:53.900994 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 11:01:53.901006 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 11:01:53.901012 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 11:01:53.901018 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:01:53.901028 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 11:01:53.901034 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:01:53.901041 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:01:53.901049 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:01:53.901056 kernel: psci: Trusted OS migration not required
Jan 29 11:01:53.901062 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:01:53.901069 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:01:53.901076 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:01:53.901082 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:01:53.901089 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:01:53.901095 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:01:53.901102 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:01:53.901109 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:01:53.901116 kernel: CPU features: detected: Spectre-v4
Jan 29 11:01:53.901123 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:01:53.901130 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:01:53.901136 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:01:53.901142 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:01:53.901149 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:01:53.901155 kernel: alternatives: applying boot alternatives
Jan 29 11:01:53.901164 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 11:01:53.901171 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:01:53.901177 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:01:53.901184 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:01:53.901192 kernel: Fallback order for Node 0: 0
Jan 29 11:01:53.901198 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 11:01:53.901205 kernel: Policy zone: Normal
Jan 29 11:01:53.901211 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:01:53.901218 kernel: software IO TLB: area num 2.
Jan 29 11:01:53.901224 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 11:01:53.901231 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved)
Jan 29 11:01:53.901238 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:01:53.901244 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:01:53.901252 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:01:53.901259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:01:53.901265 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:01:53.901273 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:01:53.901280 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:01:53.901287 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:01:53.901293 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:01:53.901300 kernel: GICv3: 256 SPIs implemented
Jan 29 11:01:53.901306 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:01:53.901313 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:01:53.901319 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:01:53.901326 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:01:53.901332 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:01:53.901339 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:01:53.901384 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:01:53.901395 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 11:01:53.901401 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 11:01:53.901408 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:01:53.901414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:53.901421 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:01:53.901427 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:01:53.901434 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:01:53.901441 kernel: Console: colour dummy device 80x25
Jan 29 11:01:53.901453 kernel: ACPI: Core revision 20230628
Jan 29 11:01:53.901461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:01:53.901471 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:01:53.901478 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:01:53.901485 kernel: landlock: Up and running.
Jan 29 11:01:53.901491 kernel: SELinux: Initializing.
Jan 29 11:01:53.901498 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:01:53.901505 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:01:53.901511 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:01:53.901518 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:01:53.901525 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:01:53.901534 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:01:53.901541 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:01:53.901548 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:01:53.901555 kernel: Remapping and enabling EFI services.
Jan 29 11:01:53.901561 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:01:53.901568 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:01:53.901575 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:01:53.901582 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 11:01:53.901589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:01:53.901597 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:01:53.901604 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:01:53.901616 kernel: SMP: Total of 2 processors activated.
Jan 29 11:01:53.901624 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:01:53.901632 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:01:53.901639 kernel: CPU features: detected: Common not Private translations
Jan 29 11:01:53.901661 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:01:53.901669 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:01:53.901677 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:01:53.902672 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:01:53.902682 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:01:53.902689 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:01:53.902696 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:01:53.902704 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:01:53.902711 kernel: alternatives: applying system-wide alternatives
Jan 29 11:01:53.902718 kernel: devtmpfs: initialized
Jan 29 11:01:53.902726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:01:53.902735 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:01:53.902882 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:01:53.902896 kernel: SMBIOS 3.0.0 present.
Jan 29 11:01:53.902904 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 11:01:53.902912 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:01:53.902919 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:01:53.902926 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:01:53.902934 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:01:53.902941 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:01:53.902953 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 29 11:01:53.902960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:01:53.902967 kernel: cpuidle: using governor menu
Jan 29 11:01:53.902974 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:01:53.902983 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:01:53.902990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:01:53.902997 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:01:53.903004 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:01:53.903011 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:01:53.903021 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 11:01:53.903028 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:01:53.903035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:01:53.903043 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:01:53.903050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:01:53.903057 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:01:53.903064 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:01:53.903071 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:01:53.903078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:01:53.903087 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:01:53.903094 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:01:53.903108 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:01:53.903115 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:01:53.903123 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:01:53.903130 kernel: ACPI: Interpreter enabled
Jan 29 11:01:53.903137 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:01:53.903144 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:01:53.903152 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:01:53.903160 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:01:53.903168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:01:53.903384 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:01:53.903484 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:01:53.903552 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:01:53.903618 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:01:53.905942 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:01:53.905987 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:01:53.905995 kernel: PCI host bridge to bus 0000:00
Jan 29 11:01:53.906091 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:01:53.906166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:01:53.906225 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:01:53.906282 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:01:53.906384 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:01:53.906475 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 11:01:53.906547 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 11:01:53.906615 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:01:53.906710 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.906779 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 11:01:53.906855 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.906927 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 11:01:53.907002 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.907069 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 11:01:53.907141 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.907207 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 11:01:53.907285 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.907372 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 11:01:53.907456 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.907523 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 11:01:53.907597 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.908793 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 11:01:53.908915 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.908991 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 11:01:53.909067 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:01:53.909134 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 11:01:53.909209 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 11:01:53.909275 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 11:01:53.909374 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:01:53.909457 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 11:01:53.909536 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:01:53.909606 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:01:53.911895 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 11:01:53.912017 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 11:01:53.912106 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 11:01:53.912174 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 11:01:53.912250 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 11:01:53.912328 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 11:01:53.912417 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 11:01:53.912500 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 11:01:53.912567 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 11:01:53.912636 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 11:01:53.912731 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 11:01:53.912806 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 11:01:53.912874 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:01:53.912951 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:01:53.913020 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 11:01:53.913088 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 11:01:53.913155 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:01:53.913231 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 11:01:53.913321 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:01:53.914778 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:01:53.914882 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 11:01:53.914949 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 11:01:53.915024 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 11:01:53.915113 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 11:01:53.915190 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:01:53.915267 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:01:53.915340 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 11:01:53.915430 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 11:01:53.915497 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 11:01:53.915570 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 11:01:53.915636 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:01:53.915736 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:01:53.915818 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 11:01:53.915885 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:01:53.915950 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:01:53.916019 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 11:01:53.916085 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:01:53.916152 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:01:53.916222 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 11:01:53.916292 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:01:53.916370 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:01:53.916450 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 11:01:53.916520 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:01:53.916584 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:01:53.918096 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 11:01:53.918215 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:01:53.918295 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 11:01:53.918384 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:01:53.918470 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 11:01:53.918538 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:01:53.918608 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 11:01:53.918715 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:01:53.918786 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 11:01:53.918858 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:01:53.918929 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 11:01:53.918995 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:01:53.919064 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 11:01:53.919129 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:01:53.919197 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 11:01:53.919263 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:01:53.919337 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 11:01:53.919462 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:01:53.919538 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 11:01:53.919607 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 11:01:53.921511 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 11:01:53.921609 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 11:01:53.921881 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 11:01:53.921970 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 11:01:53.922042 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 11:01:53.922109 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 11:01:53.922180 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 11:01:53.922249 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 11:01:53.922322 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 11:01:53.922415 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 11:01:53.922492 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 11:01:53.922566 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 11:01:53.922638 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 11:01:53.922832 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 11:01:53.922905 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 11:01:53.922971 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 11:01:53.923039 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 11:01:53.923104 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 11:01:53.923177 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 11:01:53.923259 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 11:01:53.923328 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:01:53.923467 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 11:01:53.923540 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 11:01:53.923607 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 11:01:53.923723 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 11:01:53.923806 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:01:53.923883 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 11:01:53.923959 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 11:01:53.924027 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 11:01:53.924092 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 11:01:53.924158 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:01:53.924241 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:01:53.924312 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 11:01:53.924398 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 11:01:53.924469 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 11:01:53.924535 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 11:01:53.924600 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:01:53.924691 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:01:53.926045 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 11:01:53.926172 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 11:01:53.926242 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 11:01:53.926307 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:01:53.926432 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 11:01:53.926507 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 11:01:53.926576 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 11:01:53.926643 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 11:01:53.926801 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 11:01:53.926877 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:01:53.926952 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 11:01:53.927020 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 11:01:53.927088 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 11:01:53.927152 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 11:01:53.927216 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 11:01:53.927281 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:01:53.927388 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 11:01:53.927472 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 11:01:53.927542 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 11:01:53.927610 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 11:01:53.929755 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 11:01:53.929853 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 11:01:53.929919 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:01:53.929989 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 11:01:53.930071 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 11:01:53.930148 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 11:01:53.930214 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:01:53.930283 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 11:01:53.930360 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 11:01:53.930438 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 11:01:53.930509 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:01:53.930578 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:01:53.930640 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:01:53.932054 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:01:53.932137 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 11:01:53.932204 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 11:01:53.932267 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:01:53.932340 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 11:01:53.932427 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 11:01:53.932496 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:01:53.932585 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 11:01:53.932685 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 11:01:53.932754 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:01:53.932825 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 11:01:53.932887 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 11:01:53.932947 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:01:53.933022 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 11:01:53.933083 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 11:01:53.933145 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:01:53.933214 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 11:01:53.933279 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 11:01:53.933339 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:01:53.933476 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 11:01:53.933595 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 11:01:53.933676 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:01:53.933752 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 11:01:53.933815 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 11:01:53.933881 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:01:53.933950 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 11:01:53.934011 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 11:01:53.934071 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:01:53.934080 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:01:53.934088 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:01:53.934096 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:01:53.934106 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:01:53.934114 kernel: iommu: Default domain type: Translated
Jan 29 11:01:53.934121 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:01:53.934129 kernel: efivars: Registered efivars operations
Jan 29 11:01:53.934136 kernel: vgaarb: loaded
Jan 29 11:01:53.934143 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:01:53.934151 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:01:53.934159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:01:53.934166 kernel: pnp: PnP ACPI init
Jan 29 11:01:53.934241 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:01:53.934252 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:01:53.934260 kernel: NET: Registered PF_INET protocol family
Jan 29 11:01:53.934268 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:01:53.934276 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:01:53.934284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:01:53.934291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:01:53.934299 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:01:53.934306 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:01:53.934316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:01:53.934324 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:01:53.934331 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:01:53.934427 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 11:01:53.934441 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:01:53.934449 kernel: kvm [1]: HYP mode not available
Jan 29 11:01:53.934457 kernel: Initialise system trusted keyrings
Jan 29 11:01:53.934464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:01:53.934475 kernel: Key type asymmetric registered
Jan 29 11:01:53.934483 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:01:53.934490 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:01:53.934498 kernel: io scheduler mq-deadline registered
Jan 29 11:01:53.934505 kernel: io scheduler kyber registered
Jan 29 11:01:53.934513 kernel: io scheduler bfq registered
Jan 29 11:01:53.934521 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 11:01:53.934595 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 11:01:53.934695 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 11:01:53.934772 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 11:01:53.934843 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 11:01:53.934909 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 11:01:53.934975 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 11:01:53.935047 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:01:53.935114 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:01:53.935196 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.935269 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:01:53.935337 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:01:53.935416 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.935489 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:01:53.935555 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:01:53.935627 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.935715 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:01:53.935785 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:01:53.935851 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.935922 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:01:53.935992 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:01:53.936062 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.936131 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:01:53.936197 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:01:53.936264 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
11:01:53.936275 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:01:53.936344 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:01:53.936468 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:01:53.936539 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:01:53.936550 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:01:53.936557 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:01:53.936565 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:01:53.936638 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:01:53.936751 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:01:53.936763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:01:53.936776 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:01:53.936848 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:01:53.936859 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:01:53.936866 kernel: thunder_xcv, ver 1.0 Jan 29 11:01:53.936874 kernel: thunder_bgx, ver 1.0 Jan 29 11:01:53.936881 kernel: nicpf, ver 1.0 Jan 29 11:01:53.936889 kernel: nicvf, ver 1.0 Jan 29 11:01:53.936968 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:01:53.937036 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:01:53 UTC (1738148513) Jan 29 11:01:53.937046 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:01:53.937054 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:01:53.937061 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:01:53.937069 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:01:53.937076 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:01:53.937084 kernel: Segment 
Routing with IPv6 Jan 29 11:01:53.937091 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:01:53.937098 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:01:53.937108 kernel: Key type dns_resolver registered Jan 29 11:01:53.937116 kernel: registered taskstats version 1 Jan 29 11:01:53.937123 kernel: Loading compiled-in X.509 certificates Jan 29 11:01:53.937131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 29 11:01:53.937138 kernel: Key type .fscrypt registered Jan 29 11:01:53.937146 kernel: Key type fscrypt-provisioning registered Jan 29 11:01:53.937153 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:01:53.937161 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:01:53.937168 kernel: ima: No architecture policies found Jan 29 11:01:53.937180 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:01:53.937188 kernel: clk: Disabling unused clocks Jan 29 11:01:53.937195 kernel: Freeing unused kernel memory: 39936K Jan 29 11:01:53.937203 kernel: Run /init as init process Jan 29 11:01:53.937210 kernel: with arguments: Jan 29 11:01:53.937218 kernel: /init Jan 29 11:01:53.937225 kernel: with environment: Jan 29 11:01:53.937232 kernel: HOME=/ Jan 29 11:01:53.937240 kernel: TERM=linux Jan 29 11:01:53.937249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:01:53.937258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:01:53.937268 systemd[1]: Detected virtualization kvm. Jan 29 11:01:53.937276 systemd[1]: Detected architecture arm64. Jan 29 11:01:53.937284 systemd[1]: Running in initrd. 
Jan 29 11:01:53.937292 systemd[1]: No hostname configured, using default hostname. Jan 29 11:01:53.937300 systemd[1]: Hostname set to . Jan 29 11:01:53.937310 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:01:53.937318 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:01:53.937326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:01:53.937335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:01:53.937343 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:01:53.937363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:01:53.937371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:01:53.937380 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:01:53.937392 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:01:53.937401 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:01:53.937409 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:01:53.937417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:01:53.937425 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:01:53.937433 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:01:53.937441 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:01:53.937451 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:01:53.937459 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 29 11:01:53.937468 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:01:53.937476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:01:53.937484 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:01:53.937493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:01:53.937501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:01:53.937509 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:01:53.937517 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:01:53.937527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:01:53.937535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:01:53.937543 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:01:53.937551 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:01:53.937560 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:01:53.937568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:01:53.937576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:01:53.937584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:01:53.937621 systemd-journald[238]: Collecting audit messages is disabled. Jan 29 11:01:53.937641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:01:53.937752 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:01:53.937766 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:01:53.937774 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 29 11:01:53.937783 kernel: Bridge firewalling registered Jan 29 11:01:53.937790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:01:53.937799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:01:53.937809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:01:53.937819 systemd-journald[238]: Journal started Jan 29 11:01:53.937846 systemd-journald[238]: Runtime Journal (/run/log/journal/26c3bffcf86e498195199e7e6755483c) is 8.0M, max 76.6M, 68.6M free. Jan 29 11:01:53.893826 systemd-modules-load[239]: Inserted module 'overlay' Jan 29 11:01:53.915166 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 29 11:01:53.941342 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:01:53.942869 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:01:53.958386 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:01:53.962319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:01:53.968927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:01:53.969884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:01:53.983070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:01:53.986149 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:01:53.987160 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:01:53.992859 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:01:53.996915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 11:01:54.011920 dracut-cmdline[272]: dracut-dracut-053 Jan 29 11:01:54.018381 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 11:01:54.039018 systemd-resolved[274]: Positive Trust Anchors: Jan 29 11:01:54.039038 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:01:54.039069 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:01:54.046252 systemd-resolved[274]: Defaulting to hostname 'linux'. Jan 29 11:01:54.048133 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:01:54.049441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:01:54.121728 kernel: SCSI subsystem initialized Jan 29 11:01:54.126684 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:01:54.135684 kernel: iscsi: registered transport (tcp) Jan 29 11:01:54.150703 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:01:54.150800 kernel: QLogic iSCSI HBA Driver Jan 29 11:01:54.211637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 11:01:54.217895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:01:54.248749 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:01:54.248868 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:01:54.248892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:01:54.306476 kernel: raid6: neonx8 gen() 15698 MB/s Jan 29 11:01:54.322720 kernel: raid6: neonx4 gen() 8008 MB/s Jan 29 11:01:54.339719 kernel: raid6: neonx2 gen() 11141 MB/s Jan 29 11:01:54.356708 kernel: raid6: neonx1 gen() 9728 MB/s Jan 29 11:01:54.373704 kernel: raid6: int64x8 gen() 5557 MB/s Jan 29 11:01:54.390697 kernel: raid6: int64x4 gen() 4674 MB/s Jan 29 11:01:54.407703 kernel: raid6: int64x2 gen() 5693 MB/s Jan 29 11:01:54.424703 kernel: raid6: int64x1 gen() 4866 MB/s Jan 29 11:01:54.424754 kernel: raid6: using algorithm neonx8 gen() 15698 MB/s Jan 29 11:01:54.441699 kernel: raid6: .... xor() 11562 MB/s, rmw enabled Jan 29 11:01:54.441744 kernel: raid6: using neon recovery algorithm Jan 29 11:01:54.446696 kernel: xor: measuring software checksum speed Jan 29 11:01:54.446784 kernel: 8regs : 21613 MB/sec Jan 29 11:01:54.446807 kernel: 32regs : 21699 MB/sec Jan 29 11:01:54.447712 kernel: arm64_neon : 27804 MB/sec Jan 29 11:01:54.447757 kernel: xor: using function: arm64_neon (27804 MB/sec) Jan 29 11:01:54.506802 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:01:54.522124 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:01:54.534070 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:01:54.547768 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 29 11:01:54.551199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 11:01:54.558926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:01:54.582902 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Jan 29 11:01:54.623443 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:01:54.628972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:01:54.690572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:01:54.696970 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:01:54.719632 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:01:54.722009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:01:54.723604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:01:54.724798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:01:54.735008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:01:54.756197 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:01:54.810006 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:01:54.814748 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:01:54.814854 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 11:01:54.849703 kernel: ACPI: bus type USB registered Jan 29 11:01:54.849764 kernel: usbcore: registered new interface driver usbfs Jan 29 11:01:54.849778 kernel: usbcore: registered new interface driver hub Jan 29 11:01:54.849765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:01:54.849902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:01:54.854282 kernel: usbcore: registered new device driver usb Jan 29 11:01:54.852434 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:01:54.853580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:01:54.853782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:01:54.855532 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:01:54.866202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:01:54.871261 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 11:01:54.875054 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 11:01:54.875171 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:01:54.875182 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:01:54.891065 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 11:01:54.905895 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 11:01:54.906042 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 11:01:54.906125 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 11:01:54.906211 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:01:54.906293 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:01:54.906303 kernel: GPT:17805311 != 80003071 Jan 29 11:01:54.906313 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:01:54.906322 kernel: GPT:17805311 != 80003071 Jan 29 11:01:54.906334 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:01:54.906387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:01:54.906403 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 11:01:54.892419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:01:54.901043 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:01:54.915914 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:01:54.927760 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 11:01:54.927892 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 11:01:54.927984 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:01:54.928072 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 11:01:54.928153 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 11:01:54.928234 kernel: hub 1-0:1.0: USB hub found Jan 29 11:01:54.928335 kernel: hub 1-0:1.0: 4 ports detected Jan 29 11:01:54.928439 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 11:01:54.928537 kernel: hub 2-0:1.0: USB hub found Jan 29 11:01:54.928625 kernel: hub 2-0:1.0: 4 ports detected Jan 29 11:01:54.918988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:01:54.973681 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (514) Jan 29 11:01:54.976674 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (509) Jan 29 11:01:54.977972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 11:01:54.997445 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 11:01:54.999465 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 11:01:55.006516 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 11:01:55.012429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 29 11:01:55.024993 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:01:55.033952 disk-uuid[575]: Primary Header is updated. Jan 29 11:01:55.033952 disk-uuid[575]: Secondary Entries is updated. Jan 29 11:01:55.033952 disk-uuid[575]: Secondary Header is updated. Jan 29 11:01:55.042214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:01:55.045685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:01:55.165927 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 11:01:55.408206 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 11:01:55.546843 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 11:01:55.546918 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 11:01:55.549704 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 11:01:55.604544 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 11:01:55.604854 kernel: usbcore: registered new interface driver usbhid Jan 29 11:01:55.604871 kernel: usbhid: USB HID core driver Jan 29 11:01:56.054167 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:01:56.054234 disk-uuid[576]: The operation has completed successfully. Jan 29 11:01:56.122995 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:01:56.123108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:01:56.134956 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 29 11:01:56.142496 sh[591]: Success Jan 29 11:01:56.163752 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 11:01:56.222082 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:01:56.236850 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:01:56.239773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:01:56.274535 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 29 11:01:56.274606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:01:56.274618 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:01:56.275846 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:01:56.275906 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:01:56.282706 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:01:56.284617 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:01:56.285929 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:01:56.291895 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:01:56.295066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 11:01:56.307685 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 11:01:56.307762 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:01:56.307773 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:01:56.312825 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:01:56.312899 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:01:56.327309 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:01:56.328217 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 11:01:56.334694 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:01:56.342980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:01:56.445928 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:01:56.453916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:01:56.455492 ignition[679]: Ignition 2.20.0 Jan 29 11:01:56.456184 ignition[679]: Stage: fetch-offline Jan 29 11:01:56.456662 ignition[679]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:01:56.456675 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:01:56.456869 ignition[679]: parsed url from cmdline: "" Jan 29 11:01:56.456872 ignition[679]: no config URL provided Jan 29 11:01:56.456877 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:01:56.458724 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:01:56.456888 ignition[679]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:01:56.456894 ignition[679]: failed to fetch config: resource requires networking Jan 29 11:01:56.457120 ignition[679]: Ignition finished successfully Jan 29 11:01:56.483222 systemd-networkd[779]: lo: Link UP Jan 29 11:01:56.483237 systemd-networkd[779]: lo: Gained carrier Jan 29 11:01:56.485245 systemd-networkd[779]: Enumeration completed Jan 29 11:01:56.485721 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:01:56.486397 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:56.486400 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:01:56.487391 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:56.487395 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:01:56.488061 systemd-networkd[779]: eth0: Link UP Jan 29 11:01:56.488064 systemd-networkd[779]: eth0: Gained carrier Jan 29 11:01:56.488073 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:56.489401 systemd[1]: Reached target network.target - Network. Jan 29 11:01:56.491269 systemd-networkd[779]: eth1: Link UP Jan 29 11:01:56.491272 systemd-networkd[779]: eth1: Gained carrier Jan 29 11:01:56.491282 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:01:56.501993 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 11:01:56.517165 ignition[782]: Ignition 2.20.0
Jan 29 11:01:56.517180 ignition[782]: Stage: fetch
Jan 29 11:01:56.517468 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:56.517481 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:01:56.517592 ignition[782]: parsed url from cmdline: ""
Jan 29 11:01:56.517596 ignition[782]: no config URL provided
Jan 29 11:01:56.517601 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:01:56.517610 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:01:56.517728 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 11:01:56.518527 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 11:01:56.531790 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:01:56.541759 systemd-networkd[779]: eth0: DHCPv4 address 116.202.15.110/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:01:56.718749 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 11:01:56.723814 ignition[782]: GET result: OK
Jan 29 11:01:56.723932 ignition[782]: parsing config with SHA512: 30577602de71321e6b6e2a02c11eee64dc5b4eceaa67ebdc458776d96b41682a4dba7bb0b6cf6644d13abd34c52c36c9d6af8a50b079ba65e458aa057582dedb
Jan 29 11:01:56.729947 unknown[782]: fetched base config from "system"
Jan 29 11:01:56.729958 unknown[782]: fetched base config from "system"
Jan 29 11:01:56.730357 ignition[782]: fetch: fetch complete
Jan 29 11:01:56.729964 unknown[782]: fetched user config from "hetzner"
Jan 29 11:01:56.730363 ignition[782]: fetch: fetch passed
Jan 29 11:01:56.732643 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:01:56.730413 ignition[782]: Ignition finished successfully
Jan 29 11:01:56.737955 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:01:56.754086 ignition[789]: Ignition 2.20.0
Jan 29 11:01:56.754099 ignition[789]: Stage: kargs
Jan 29 11:01:56.754373 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:56.754392 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:01:56.758258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:01:56.755547 ignition[789]: kargs: kargs passed
Jan 29 11:01:56.755604 ignition[789]: Ignition finished successfully
Jan 29 11:01:56.776001 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:01:56.790869 ignition[796]: Ignition 2.20.0
Jan 29 11:01:56.790877 ignition[796]: Stage: disks
Jan 29 11:01:56.791095 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:56.791107 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:01:56.792199 ignition[796]: disks: disks passed
Jan 29 11:01:56.794216 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:01:56.792257 ignition[796]: Ignition finished successfully
Jan 29 11:01:56.795442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:01:56.797778 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:01:56.798822 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:01:56.799780 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:01:56.801448 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:01:56.807899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:01:56.829551 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 11:01:56.837730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:01:56.845024 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:01:56.888725 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 11:01:56.889417 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:01:56.890692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:01:56.896823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:01:56.901977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:01:56.904935 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:01:56.908952 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:01:56.909726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:01:56.913830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:01:56.919683 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812)
Jan 29 11:01:56.923692 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:01:56.923778 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:56.923803 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:01:56.922968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:01:56.933840 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:01:56.933935 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:01:56.937478 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:01:56.993479 coreos-metadata[814]: Jan 29 11:01:56.993 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 11:01:56.995723 coreos-metadata[814]: Jan 29 11:01:56.995 INFO Fetch successful
Jan 29 11:01:56.995723 coreos-metadata[814]: Jan 29 11:01:56.995 INFO wrote hostname ci-4186-1-0-1-dfe7c46cbd to /sysroot/etc/hostname
Jan 29 11:01:56.999003 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:01:57.001180 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:01:57.006359 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:01:57.011225 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:01:57.015739 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:01:57.119454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:01:57.125869 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:01:57.131311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:01:57.135715 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:01:57.162950 ignition[928]: INFO : Ignition 2.20.0
Jan 29 11:01:57.163791 ignition[928]: INFO : Stage: mount
Jan 29 11:01:57.165699 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:57.165699 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:01:57.165699 ignition[928]: INFO : mount: mount passed
Jan 29 11:01:57.165699 ignition[928]: INFO : Ignition finished successfully
Jan 29 11:01:57.169926 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:01:57.179841 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:01:57.181970 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:01:57.273364 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:01:57.287077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:01:57.300717 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Jan 29 11:01:57.302799 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:01:57.302866 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:01:57.302883 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:01:57.306877 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:01:57.306957 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:01:57.309775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:01:57.331141 ignition[957]: INFO : Ignition 2.20.0
Jan 29 11:01:57.331928 ignition[957]: INFO : Stage: files
Jan 29 11:01:57.332300 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:01:57.332300 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:01:57.333536 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:01:57.336789 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:01:57.336789 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:01:57.339819 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:01:57.339819 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:01:57.339819 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:01:57.339080 unknown[957]: wrote ssh authorized keys file for user: core
Jan 29 11:01:57.344210 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:01:57.344210 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 11:01:57.429828 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:01:57.624838 systemd-networkd[779]: eth0: Gained IPv6LL
Jan 29 11:01:57.689943 systemd-networkd[779]: eth1: Gained IPv6LL
Jan 29 11:01:57.696871 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:01:57.696871 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:01:57.696871 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:01:58.095073 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:01:58.291889 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:01:58.291889 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:58.295875 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:01:58.923401 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:01:59.961633 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:01:59.961633 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:01:59.965699 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:01:59.979917 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:01:59.979917 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:01:59.979917 ignition[957]: INFO : files: files passed
Jan 29 11:01:59.979917 ignition[957]: INFO : Ignition finished successfully
Jan 29 11:01:59.969451 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:01:59.981854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:01:59.983300 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:02:00.000219 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:02:00.000481 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:02:00.018518 initrd-setup-root-after-ignition[985]: grep:
Jan 29 11:02:00.018518 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:02:00.021212 initrd-setup-root-after-ignition[985]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:02:00.021212 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:02:00.022498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:02:00.023770 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:02:00.034978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:02:00.066983 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:02:00.067108 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:02:00.069449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:02:00.070988 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:02:00.072310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:02:00.081973 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:02:00.098399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:02:00.104858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:02:00.134218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:02:00.135794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:02:00.141990 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:02:00.142623 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:02:00.142779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:02:00.145037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:02:00.146181 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:02:00.147083 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:02:00.148127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:02:00.149234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:02:00.150469 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:02:00.151370 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:02:00.152487 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:02:00.153567 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:02:00.154545 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:02:00.155374 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:02:00.155555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:02:00.156877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:02:00.158015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:02:00.159033 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:02:00.160094 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:02:00.161613 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:02:00.161817 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:02:00.163412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:02:00.163692 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:02:00.165214 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:02:00.165421 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:02:00.166400 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:02:00.166546 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:02:00.175119 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:02:00.178918 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:02:00.182282 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:02:00.182507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:02:00.185978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:02:00.186102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:02:00.192784 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:02:00.192900 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:02:00.198662 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:02:00.204509 ignition[1010]: INFO : Ignition 2.20.0
Jan 29 11:02:00.204509 ignition[1010]: INFO : Stage: umount
Jan 29 11:02:00.204509 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:02:00.204509 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:02:00.204509 ignition[1010]: INFO : umount: umount passed
Jan 29 11:02:00.204509 ignition[1010]: INFO : Ignition finished successfully
Jan 29 11:02:00.204959 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:02:00.205119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:02:00.207633 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:02:00.207859 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:02:00.209912 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:02:00.210046 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:02:00.211195 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:02:00.211240 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:02:00.212274 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:02:00.212334 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:02:00.213129 systemd[1]: Stopped target network.target - Network.
Jan 29 11:02:00.213961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:02:00.214014 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:02:00.214993 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:02:00.215905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:02:00.219705 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:02:00.220796 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:02:00.221716 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:02:00.223442 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:02:00.223531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:02:00.225216 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:02:00.225286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:02:00.226294 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:02:00.226368 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:02:00.227272 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:02:00.227314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:02:00.228276 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:02:00.228335 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:02:00.229434 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:02:00.230292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:02:00.235716 systemd-networkd[779]: eth1: DHCPv6 lease lost
Jan 29 11:02:00.238125 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:02:00.238264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:02:00.239738 systemd-networkd[779]: eth0: DHCPv6 lease lost
Jan 29 11:02:00.242309 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:02:00.242485 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:02:00.245620 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:02:00.246738 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:02:00.248620 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:02:00.248760 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:02:00.253805 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:02:00.254380 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:02:00.254459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:02:00.255199 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:02:00.255240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:02:00.256227 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:02:00.256268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:02:00.258701 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:02:00.276267 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:02:00.276514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:02:00.279179 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:02:00.279261 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:02:00.280382 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:02:00.280423 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:02:00.281629 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:02:00.281741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:02:00.283904 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:02:00.283962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:02:00.285504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:02:00.285559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:02:00.291957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:02:00.292597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:02:00.292687 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:02:00.295339 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:02:00.295407 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:02:00.296089 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:02:00.296130 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:02:00.297249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:02:00.297288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:02:00.300873 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:02:00.300996 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:02:00.308300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:02:00.308472 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:02:00.309431 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:02:00.315947 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:02:00.328065 systemd[1]: Switching root.
Jan 29 11:02:00.373338 systemd-journald[238]: Journal stopped
Jan 29 11:02:01.348488 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:02:01.348577 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:02:01.348592 kernel: SELinux: policy capability open_perms=1
Jan 29 11:02:01.348601 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:02:01.348610 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:02:01.348619 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:02:01.348629 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:02:01.348639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:02:01.348684 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:02:01.348696 kernel: audit: type=1403 audit(1738148520.561:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:02:01.348706 systemd[1]: Successfully loaded SELinux policy in 34.930ms.
Jan 29 11:02:01.348734 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.560ms.
Jan 29 11:02:01.348745 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:02:01.348756 systemd[1]: Detected virtualization kvm.
Jan 29 11:02:01.348766 systemd[1]: Detected architecture arm64.
Jan 29 11:02:01.348776 systemd[1]: Detected first boot.
Jan 29 11:02:01.348786 systemd[1]: Hostname set to .
Jan 29 11:02:01.348796 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:02:01.348807 zram_generator::config[1053]: No configuration found.
Jan 29 11:02:01.348820 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:02:01.348830 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:02:01.348842 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:02:01.348852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:02:01.348864 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:02:01.348874 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:02:01.348885 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:02:01.348894 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:02:01.348906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:02:01.348916 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:02:01.348926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:02:01.348937 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:02:01.348947 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:02:01.348957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:02:01.348968 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:02:01.348978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:02:01.348988 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:02:01.349000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:02:01.349014 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:02:01.349024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:02:01.349033 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:02:01.349044 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:02:01.349054 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:02:01.349067 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:02:01.349077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:02:01.349090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:02:01.349100 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:02:01.349110 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:02:01.349120 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:02:01.349131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:02:01.349141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:02:01.349152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:02:01.349162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:02:01.349176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:02:01.349189 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:02:01.349201 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:02:01.349211 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:02:01.349221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:02:01.349233 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:02:01.349244 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:02:01.349254 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:02:01.349265 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:02:01.349275 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:02:01.349285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:02:01.349296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:02:01.349306 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:02:01.349316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:02:01.349344 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:02:01.349357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:02:01.349368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:02:01.349377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:02:01.349388 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:02:01.349398 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:02:01.349408 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:02:01.349418 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:02:01.349430 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:02:01.349440 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:02:01.349450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:02:01.349461 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:02:01.349471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:02:01.349481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:02:01.349491 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:02:01.349501 systemd[1]: Stopped verity-setup.service.
Jan 29 11:02:01.349511 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:02:01.349523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:02:01.349533 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:02:01.349543 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:02:01.349553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:02:01.349563 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:02:01.349576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:02:01.349586 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:02:01.349597 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:02:01.349606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:02:01.349620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:02:01.349630 kernel: fuse: init (API version 7.39)
Jan 29 11:02:01.349640 kernel: loop: module loaded
Jan 29 11:02:01.352869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:02:01.352900 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:02:01.352919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:02:01.352930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:02:01.352941 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:02:01.352952 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:02:01.352965 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:02:01.352978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:02:01.352989 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:02:01.352999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:02:01.353072 systemd-journald[1116]: Collecting audit messages is disabled.
Jan 29 11:02:01.353108 systemd-journald[1116]: Journal started
Jan 29 11:02:01.353137 systemd-journald[1116]: Runtime Journal (/run/log/journal/26c3bffcf86e498195199e7e6755483c) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:02:01.359711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:02:01.359787 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:02:01.063719 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:02:01.086378 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 11:02:01.086876 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:02:01.374286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:02:01.374378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:02:01.377672 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:02:01.379711 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:02:01.380625 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:02:01.381963 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:02:01.409304 kernel: ACPI: bus type drm_connector registered
Jan 29 11:02:01.418493 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:02:01.419729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:02:01.424799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:02:01.427398 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:02:01.427464 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:02:01.430450 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:02:01.435908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:02:01.442601 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:02:01.445039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:02:01.455739 systemd-tmpfiles[1138]: ACLs are not supported, ignoring.
Jan 29 11:02:01.455761 systemd-tmpfiles[1138]: ACLs are not supported, ignoring.
Jan 29 11:02:01.455914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:02:01.459199 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:02:01.461751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:02:01.470359 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:02:01.474481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:02:01.477818 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:02:01.480283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:02:01.482124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:02:01.484305 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:02:01.501028 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:02:01.505901 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:02:01.507745 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:02:01.510838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:02:01.518866 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:02:01.530613 systemd-journald[1116]: Time spent on flushing to /var/log/journal/26c3bffcf86e498195199e7e6755483c is 51.327ms for 1146 entries.
Jan 29 11:02:01.530613 systemd-journald[1116]: System Journal (/var/log/journal/26c3bffcf86e498195199e7e6755483c) is 8.0M, max 584.8M, 576.8M free.
Jan 29 11:02:01.594456 systemd-journald[1116]: Received client request to flush runtime journal.
Jan 29 11:02:01.594544 kernel: loop0: detected capacity change from 0 to 116784
Jan 29 11:02:01.594569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:02:01.573413 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:02:01.600778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:02:01.607773 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:02:01.618922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:02:01.621091 kernel: loop1: detected capacity change from 0 to 201592
Jan 29 11:02:01.622794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:02:01.625727 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:02:01.653565 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 29 11:02:01.653587 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 29 11:02:01.659486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:02:01.668678 kernel: loop2: detected capacity change from 0 to 8
Jan 29 11:02:01.689698 kernel: loop3: detected capacity change from 0 to 113552
Jan 29 11:02:01.742682 kernel: loop4: detected capacity change from 0 to 116784
Jan 29 11:02:01.758690 kernel: loop5: detected capacity change from 0 to 201592
Jan 29 11:02:01.791393 kernel: loop6: detected capacity change from 0 to 8
Jan 29 11:02:01.796716 kernel: loop7: detected capacity change from 0 to 113552
Jan 29 11:02:01.811696 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 11:02:01.815740 (sd-merge)[1197]: Merged extensions into '/usr'.
Jan 29 11:02:01.822062 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:02:01.822086 systemd[1]: Reloading...
Jan 29 11:02:01.952706 zram_generator::config[1226]: No configuration found.
Jan 29 11:02:02.076542 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:02:02.090311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:02:02.137703 systemd[1]: Reloading finished in 314 ms.
Jan 29 11:02:02.175171 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:02:02.179883 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:02:02.192068 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:02:02.196865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:02:02.224741 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:02:02.224778 systemd[1]: Reloading...
Jan 29 11:02:02.252613 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:02:02.254182 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:02:02.256232 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:02:02.257116 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 29 11:02:02.257226 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 29 11:02:02.263192 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:02:02.263412 systemd-tmpfiles[1261]: Skipping /boot
Jan 29 11:02:02.272489 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:02:02.272640 systemd-tmpfiles[1261]: Skipping /boot
Jan 29 11:02:02.317683 zram_generator::config[1287]: No configuration found.
Jan 29 11:02:02.416749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:02:02.464221 systemd[1]: Reloading finished in 239 ms.
Jan 29 11:02:02.484463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:02:02.495484 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:02:02.509929 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:02:02.518224 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:02:02.524048 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:02:02.530947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:02:02.538888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:02:02.542292 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:02:02.546715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:02:02.551991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:02:02.563387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:02:02.570087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:02:02.571844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:02:02.578010 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:02:02.583773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:02:02.584019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:02:02.588237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:02:02.594799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:02:02.607194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:02:02.607990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:02:02.608641 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:02:02.609739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:02:02.618118 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:02:02.619504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:02:02.620260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:02:02.623515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:02:02.633830 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:02:02.635092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:02:02.635267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:02:02.636280 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:02:02.636493 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:02:02.641451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:02:02.655415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:02:02.659155 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:02:02.677511 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:02:02.679904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:02:02.698479 augenrules[1371]: No rules
Jan 29 11:02:02.694097 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:02:02.696550 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:02:02.699012 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:02:02.710047 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Jan 29 11:02:02.723965 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:02:02.756181 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:02:02.757917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:02:02.759243 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:02:02.768545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:02:02.772733 systemd-resolved[1330]: Positive Trust Anchors:
Jan 29 11:02:02.773091 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:02:02.773126 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:02:02.778977 systemd-resolved[1330]: Using system hostname 'ci-4186-1-0-1-dfe7c46cbd'.
Jan 29 11:02:02.782304 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:02:02.783157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:02:02.853619 systemd-networkd[1384]: lo: Link UP
Jan 29 11:02:02.853633 systemd-networkd[1384]: lo: Gained carrier
Jan 29 11:02:02.854505 systemd-networkd[1384]: Enumeration completed
Jan 29 11:02:02.855049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:02:02.856118 systemd[1]: Reached target network.target - Network.
Jan 29 11:02:02.871131 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:02:02.871957 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:02:02.943765 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:02:02.945274 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:02:02.945293 systemd-networkd[1384]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:02:02.947417 systemd-networkd[1384]: eth1: Link UP
Jan 29 11:02:02.947427 systemd-networkd[1384]: eth1: Gained carrier
Jan 29 11:02:02.947450 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:02:02.977874 systemd-networkd[1384]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:02:02.979223 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:03.011107 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 11:02:03.011234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:02:03.024849 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1399)
Jan 29 11:02:03.025012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:02:03.027891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:02:03.036766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:02:03.038298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:02:03.038364 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:02:03.041991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:02:03.042180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:02:03.053951 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:02:03.053963 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:02:03.055550 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:03.059185 systemd-networkd[1384]: eth0: Link UP
Jan 29 11:02:03.059200 systemd-networkd[1384]: eth0: Gained carrier
Jan 29 11:02:03.059225 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:02:03.063526 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:03.069285 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:02:03.070703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:02:03.073518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:02:03.074607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:02:03.078232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:02:03.078360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:02:03.091678 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 11:02:03.091792 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:02:03.091806 kernel: [drm] features: -context_init
Jan 29 11:02:03.093686 kernel: [drm] number of scanouts: 1
Jan 29 11:02:03.094668 kernel: [drm] number of cap sets: 0
Jan 29 11:02:03.095690 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 11:02:03.120669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:02:03.126679 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:02:03.126840 systemd-networkd[1384]: eth0: DHCPv4 address 116.202.15.110/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:02:03.128539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:02:03.130753 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:03.139403 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:02:03.157525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:02:03.163712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:02:03.218752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:02:03.295412 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:02:03.309973 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:02:03.323347 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:02:03.351845 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:02:03.354539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:02:03.356446 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:02:03.358185 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:02:03.360118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:02:03.362223 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:02:03.362951 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:02:03.363676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:02:03.364324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:02:03.364363 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:02:03.364863 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:02:03.366973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:02:03.369271 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:02:03.373862 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:02:03.377005 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:02:03.378805 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:02:03.379633 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:02:03.380257 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:02:03.381607 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:02:03.381850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:02:03.383972 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:02:03.388122 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:02:03.393905 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:02:03.399948 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:02:03.407961 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:02:03.412923 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:02:03.413694 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:02:03.418575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:02:03.422983 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:02:03.426928 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 11:02:03.432865 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:02:03.437607 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:02:03.444832 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:02:03.447464 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:02:03.449674 jq[1449]: false
Jan 29 11:02:03.449300 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:02:03.451939 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:02:03.457951 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:02:03.461722 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:02:03.471135 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:02:03.471379 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:02:03.472619 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:02:03.473903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:02:03.494819 dbus-daemon[1448]: [system] SELinux support is enabled
Jan 29 11:02:03.495432 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:02:03.500869 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:02:03.502872 jq[1461]: true
Jan 29 11:02:03.500915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:02:03.502784 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:02:03.502807 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:02:03.526816 extend-filesystems[1452]: Found loop4
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found loop5
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found loop6
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found loop7
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda1
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda2
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda3
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found usr
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda4
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda6
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda7
Jan 29 11:02:03.539803 extend-filesystems[1452]: Found sda9
Jan 29 11:02:03.539803 extend-filesystems[1452]: Checking size of /dev/sda9
Jan 29 11:02:03.569939 coreos-metadata[1447]: Jan 29 11:02:03.541 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 29 11:02:03.569939 coreos-metadata[1447]: Jan 29 11:02:03.545 INFO Fetch successful
Jan 29 11:02:03.569939 coreos-metadata[1447]: Jan 29 11:02:03.547 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 29 11:02:03.569939 coreos-metadata[1447]: Jan 29 11:02:03.550 INFO Fetch successful
Jan 29 11:02:03.585042 tar[1470]: linux-arm64/LICENSE
Jan 29 11:02:03.585042 tar[1470]: linux-arm64/helm
Jan 29 11:02:03.543170 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:02:03.582270 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:02:03.590420 jq[1474]: true
Jan 29 11:02:03.582535 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:02:03.591901 extend-filesystems[1452]: Resized partition /dev/sda9
Jan 29 11:02:03.601044 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:02:03.618769 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 29 11:02:03.619703 update_engine[1460]: I20250129 11:02:03.619029 1460 main.cc:92] Flatcar Update Engine starting
Jan 29 11:02:03.628004 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:02:03.629236 update_engine[1460]: I20250129 11:02:03.629060 1460 update_check_scheduler.cc:74] Next update check in 3m4s
Jan 29 11:02:03.646435 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:02:03.711994 systemd-logind[1459]: New seat seat0.
Jan 29 11:02:03.731402 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:02:03.731428 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 29 11:02:03.777623 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1401)
Jan 29 11:02:03.775405 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:02:03.787674 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 29 11:02:03.806288 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 29 11:02:03.806288 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 29 11:02:03.806288 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 29 11:02:03.824882 extend-filesystems[1452]: Resized filesystem in /dev/sda9
Jan 29 11:02:03.824882 extend-filesystems[1452]: Found sr0
Jan 29 11:02:03.809823 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:02:03.827525 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:02:03.823570 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:02:03.823951 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:02:03.827857 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:02:03.838913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:02:03.850994 systemd[1]: Starting sshkeys.service...
Jan 29 11:02:03.892563 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 11:02:03.905071 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 11:02:03.917764 containerd[1481]: time="2025-01-29T11:02:03.915952200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:02:03.966370 coreos-metadata[1528]: Jan 29 11:02:03.964 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 29 11:02:03.967041 coreos-metadata[1528]: Jan 29 11:02:03.966 INFO Fetch successful
Jan 29 11:02:03.974458 unknown[1528]: wrote ssh authorized keys file for user: core
Jan 29 11:02:03.996283 containerd[1481]: time="2025-01-29T11:02:03.996023280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:03.998980 containerd[1481]: time="2025-01-29T11:02:03.998927120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:02:03.999111 containerd[1481]: time="2025-01-29T11:02:03.999096880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:02:04.000498 containerd[1481]: time="2025-01-29T11:02:03.999159720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:02:04.000498 containerd[1481]: time="2025-01-29T11:02:03.999397400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:02:04.000498 containerd[1481]: time="2025-01-29T11:02:03.999419480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.000498 containerd[1481]: time="2025-01-29T11:02:03.999492800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:02:04.000498 containerd[1481]: time="2025-01-29T11:02:03.999505680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001134840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001172200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001189960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001199720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001348840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001573560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001741800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001757840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001843560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:02:04.001949 containerd[1481]: time="2025-01-29T11:02:04.001885440Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.014602840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.014730680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.014750560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.014767960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.014826800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015016480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015337880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015467240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015484360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015498800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015512760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015526400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015540160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.016978 containerd[1481]: time="2025-01-29T11:02:04.015556800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015572800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015586760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015601360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015614080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015636440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015669040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015693200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015717560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015729720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015744520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015757040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015772560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015785600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017415 containerd[1481]: time="2025-01-29T11:02:04.015800840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015813680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015825600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015838440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015854640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015876800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015891280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.015902200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016109440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016132360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016144960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016157720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016166880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016184320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:02:04.017637 containerd[1481]: time="2025-01-29T11:02:04.016195040Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:02:04.018879 containerd[1481]: time="2025-01-29T11:02:04.016205520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:02:04.019694 containerd[1481]: time="2025-01-29T11:02:04.016625760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:02:04.019694 containerd[1481]: time="2025-01-29T11:02:04.019217080Z" level=info msg="Connect containerd service"
Jan 29 11:02:04.019694 containerd[1481]: time="2025-01-29T11:02:04.019287120Z" level=info msg="using legacy CRI server"
Jan 29 11:02:04.019694 containerd[1481]: time="2025-01-29T11:02:04.019294680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:02:04.020929 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:02:04.023114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 11:02:04.027992 containerd[1481]: time="2025-01-29T11:02:04.023948520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:02:04.027992 containerd[1481]: time="2025-01-29T11:02:04.027217560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:02:04.029707 systemd[1]: Finished sshkeys.service.
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032003000Z" level=info msg="Start subscribing containerd event"
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032078800Z" level=info msg="Start recovering state"
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032170840Z" level=info msg="Start event monitor"
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032185200Z" level=info msg="Start snapshots syncer"
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032194960Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:02:04.034179 containerd[1481]: time="2025-01-29T11:02:04.032201960Z" level=info msg="Start streaming server"
Jan 29 11:02:04.030737 systemd-networkd[1384]: eth1: Gained IPv6LL
Jan 29 11:02:04.033339 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:04.038091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:02:04.041804 containerd[1481]: time="2025-01-29T11:02:04.041281480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:02:04.041804 containerd[1481]: time="2025-01-29T11:02:04.041391480Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:02:04.041804 containerd[1481]: time="2025-01-29T11:02:04.041463960Z" level=info msg="containerd successfully booted in 0.131446s"
Jan 29 11:02:04.040363 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:02:04.062118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:04.067003 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:02:04.067869 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:02:04.115736 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:02:04.168120 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:02:04.217422 systemd-networkd[1384]: eth0: Gained IPv6LL
Jan 29 11:02:04.217940 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 11:02:04.616804 tar[1470]: linux-arm64/README.md
Jan 29 11:02:04.636794 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:02:04.924800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:04.931482 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:05.153335 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:02:05.181586 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:02:05.191294 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:02:05.203012 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:02:05.203451 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:02:05.213288 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:02:05.224010 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:02:05.232079 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:02:05.241052 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 11:02:05.241875 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:02:05.242457 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:02:05.243713 systemd[1]: Startup finished in 793ms (kernel) + 6.876s (initrd) + 4.716s (userspace) = 12.387s.
Jan 29 11:02:05.259640 agetty[1583]: failed to open credentials directory
Jan 29 11:02:05.267249 agetty[1582]: failed to open credentials directory
Jan 29 11:02:05.506338 kubelet[1562]: E0129 11:02:05.506096 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:05.509783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:05.510045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:02:15.584951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:02:15.592102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:15.721518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:15.727069 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:15.788145 kubelet[1598]: E0129 11:02:15.788072 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:15.792774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:15.792944 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:02:25.834332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:02:25.841931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:25.956075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:25.961417 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:26.006093 kubelet[1612]: E0129 11:02:26.006001 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:26.008372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:26.008509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:02:34.441391 systemd-timesyncd[1356]: Contacted time server 62.75.236.38:123 (2.flatcar.pool.ntp.org).
Jan 29 11:02:34.441519 systemd-timesyncd[1356]: Initial clock synchronization to Wed 2025-01-29 11:02:34.395120 UTC.
Jan 29 11:02:36.084626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 11:02:36.095037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:36.207912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:36.213100 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:36.258236 kubelet[1628]: E0129 11:02:36.258146 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:36.260255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:36.260387 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:02:46.334753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 11:02:46.344012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:46.488873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:46.489107 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:46.536655 kubelet[1643]: E0129 11:02:46.536549 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:46.538671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:46.538807 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:02:48.881853 update_engine[1460]: I20250129 11:02:48.881683 1460 update_attempter.cc:509] Updating boot flags...
Jan 29 11:02:48.927762 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1659)
Jan 29 11:02:48.988696 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1658)
Jan 29 11:02:49.046743 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1658)
Jan 29 11:02:56.584247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 11:02:56.593006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:02:56.708812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:02:56.714620 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:02:56.762166 kubelet[1679]: E0129 11:02:56.762107 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:02:56.765027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:02:56.765217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:06.834423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 11:03:06.840029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:06.965537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:06.979268 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:07.030731 kubelet[1692]: E0129 11:03:07.030637 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:07.033143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:07.033323 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:17.085139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 11:03:17.095143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:17.295086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:17.295843 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:17.357060 kubelet[1708]: E0129 11:03:17.356895 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:17.360225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:17.360428 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:27.584235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 29 11:03:27.591795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:27.750064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:27.751203 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:27.793767 kubelet[1724]: E0129 11:03:27.793559 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:27.796303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:27.796436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:37.834467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 11:03:37.843082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:37.970288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:37.976422 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:38.033476 kubelet[1739]: E0129 11:03:38.033365 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:38.037140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:38.037311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:48.084410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 29 11:03:48.092316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:48.251790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:03:48.264271 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:48.311328 kubelet[1754]: E0129 11:03:48.311261 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:48.313641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:48.313811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:52.909504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:03:52.918089 systemd[1]: Started sshd@0-116.202.15.110:22-147.75.109.163:49454.service - OpenSSH per-connection server daemon (147.75.109.163:49454). Jan 29 11:03:53.921063 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 49454 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:03:53.924046 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:53.938427 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:03:53.947994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:03:53.950473 systemd-logind[1459]: New session 1 of user core. Jan 29 11:03:53.973774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:03:53.981105 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:03:53.995071 (systemd)[1766]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:03:54.106101 systemd[1766]: Queued start job for default target default.target. Jan 29 11:03:54.118329 systemd[1766]: Created slice app.slice - User Application Slice. Jan 29 11:03:54.118394 systemd[1766]: Reached target paths.target - Paths. Jan 29 11:03:54.118420 systemd[1766]: Reached target timers.target - Timers. Jan 29 11:03:54.124056 systemd[1766]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:03:54.139906 systemd[1766]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:03:54.139987 systemd[1766]: Reached target sockets.target - Sockets. Jan 29 11:03:54.140000 systemd[1766]: Reached target basic.target - Basic System. Jan 29 11:03:54.140056 systemd[1766]: Reached target default.target - Main User Target. Jan 29 11:03:54.140086 systemd[1766]: Startup finished in 137ms. Jan 29 11:03:54.140297 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:03:54.152161 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:03:54.857029 systemd[1]: Started sshd@1-116.202.15.110:22-147.75.109.163:49464.service - OpenSSH per-connection server daemon (147.75.109.163:49464). Jan 29 11:03:55.844155 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 49464 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:03:55.847574 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:55.852983 systemd-logind[1459]: New session 2 of user core. Jan 29 11:03:55.860901 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 11:03:56.529750 sshd[1779]: Connection closed by 147.75.109.163 port 49464 Jan 29 11:03:56.529599 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 29 11:03:56.534977 systemd[1]: sshd@1-116.202.15.110:22-147.75.109.163:49464.service: Deactivated successfully. Jan 29 11:03:56.537545 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:03:56.542830 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:03:56.546415 systemd-logind[1459]: Removed session 2. Jan 29 11:03:56.698953 systemd[1]: Started sshd@2-116.202.15.110:22-147.75.109.163:49474.service - OpenSSH per-connection server daemon (147.75.109.163:49474). Jan 29 11:03:57.705754 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 49474 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:03:57.708094 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:57.712754 systemd-logind[1459]: New session 3 of user core. Jan 29 11:03:57.720956 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:03:58.334388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 11:03:58.343951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:03:58.391698 sshd[1786]: Connection closed by 147.75.109.163 port 49474 Jan 29 11:03:58.393053 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 29 11:03:58.399035 systemd[1]: sshd@2-116.202.15.110:22-147.75.109.163:49474.service: Deactivated successfully. Jan 29 11:03:58.403451 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:03:58.405861 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:03:58.407366 systemd-logind[1459]: Removed session 3. Jan 29 11:03:58.466632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:03:58.473680 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:03:58.515991 kubelet[1798]: E0129 11:03:58.515893 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:03:58.518080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:03:58.518215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:03:58.562365 systemd[1]: Started sshd@3-116.202.15.110:22-147.75.109.163:55674.service - OpenSSH per-connection server daemon (147.75.109.163:55674). Jan 29 11:03:59.555370 sshd[1806]: Accepted publickey for core from 147.75.109.163 port 55674 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:03:59.559118 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:03:59.567438 systemd-logind[1459]: New session 4 of user core. Jan 29 11:03:59.574109 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:04:00.236675 sshd[1808]: Connection closed by 147.75.109.163 port 55674 Jan 29 11:04:00.236512 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:00.240745 systemd[1]: sshd@3-116.202.15.110:22-147.75.109.163:55674.service: Deactivated successfully. Jan 29 11:04:00.242706 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:04:00.245145 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:04:00.247241 systemd-logind[1459]: Removed session 4. 
Jan 29 11:04:00.413175 systemd[1]: Started sshd@4-116.202.15.110:22-147.75.109.163:55676.service - OpenSSH per-connection server daemon (147.75.109.163:55676). Jan 29 11:04:01.397867 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 55676 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:01.399880 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:01.405280 systemd-logind[1459]: New session 5 of user core. Jan 29 11:04:01.414062 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:04:01.936326 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:04:01.936633 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:04:01.956437 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 29 11:04:02.118059 sshd[1815]: Connection closed by 147.75.109.163 port 55676 Jan 29 11:04:02.117611 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:02.122000 systemd[1]: sshd@4-116.202.15.110:22-147.75.109.163:55676.service: Deactivated successfully. Jan 29 11:04:02.125990 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:04:02.129327 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:04:02.130643 systemd-logind[1459]: Removed session 5. Jan 29 11:04:02.305362 systemd[1]: Started sshd@5-116.202.15.110:22-147.75.109.163:55688.service - OpenSSH per-connection server daemon (147.75.109.163:55688). Jan 29 11:04:03.310909 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 55688 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:03.313936 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:03.323748 systemd-logind[1459]: New session 6 of user core. 
Jan 29 11:04:03.348708 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:04:03.842156 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:04:03.842468 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:04:03.846572 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 29 11:04:03.853472 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:04:03.854135 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:04:03.872123 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:04:03.904584 augenrules[1847]: No rules Jan 29 11:04:03.905406 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:04:03.905792 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:04:03.907881 sudo[1824]: pam_unix(sudo:session): session closed for user root Jan 29 11:04:04.069336 sshd[1823]: Connection closed by 147.75.109.163 port 55688 Jan 29 11:04:04.070136 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:04.076541 systemd[1]: sshd@5-116.202.15.110:22-147.75.109.163:55688.service: Deactivated successfully. Jan 29 11:04:04.078805 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:04:04.079870 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:04:04.081937 systemd-logind[1459]: Removed session 6. Jan 29 11:04:04.256753 systemd[1]: Started sshd@6-116.202.15.110:22-147.75.109.163:55694.service - OpenSSH per-connection server daemon (147.75.109.163:55694). 
Jan 29 11:04:05.242078 sshd[1855]: Accepted publickey for core from 147.75.109.163 port 55694 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:05.244483 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:05.249643 systemd-logind[1459]: New session 7 of user core. Jan 29 11:04:05.261981 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:04:05.766491 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:04:05.766886 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:04:06.089027 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:04:06.090018 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:04:06.330173 dockerd[1876]: time="2025-01-29T11:04:06.330050583Z" level=info msg="Starting up" Jan 29 11:04:06.405205 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport562960000-merged.mount: Deactivated successfully. Jan 29 11:04:06.431368 dockerd[1876]: time="2025-01-29T11:04:06.431298911Z" level=info msg="Loading containers: start." Jan 29 11:04:06.614702 kernel: Initializing XFRM netlink socket Jan 29 11:04:06.701853 systemd-networkd[1384]: docker0: Link UP Jan 29 11:04:06.745424 dockerd[1876]: time="2025-01-29T11:04:06.744790867Z" level=info msg="Loading containers: done." 
Jan 29 11:04:06.762199 dockerd[1876]: time="2025-01-29T11:04:06.762110297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:04:06.762482 dockerd[1876]: time="2025-01-29T11:04:06.762229093Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:04:06.762482 dockerd[1876]: time="2025-01-29T11:04:06.762425965Z" level=info msg="Daemon has completed initialization" Jan 29 11:04:06.799727 dockerd[1876]: time="2025-01-29T11:04:06.799655842Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:04:06.800086 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:04:07.401886 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1461610133-merged.mount: Deactivated successfully. Jan 29 11:04:07.566042 containerd[1481]: time="2025-01-29T11:04:07.565486849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:04:08.248050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123912557.mount: Deactivated successfully. Jan 29 11:04:08.584235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 11:04:08.591063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:08.729635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:04:08.746574 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:08.795903 kubelet[2120]: E0129 11:04:08.795857 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:08.798725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:08.798909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:10.345444 containerd[1481]: time="2025-01-29T11:04:10.345213629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:10.347442 containerd[1481]: time="2025-01-29T11:04:10.347368191Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26221040" Jan 29 11:04:10.348974 containerd[1481]: time="2025-01-29T11:04:10.348902015Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:10.352482 containerd[1481]: time="2025-01-29T11:04:10.352401769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:10.353892 containerd[1481]: time="2025-01-29T11:04:10.353676483Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.788108997s" Jan 29 11:04:10.353892 containerd[1481]: time="2025-01-29T11:04:10.353723721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 11:04:10.354710 containerd[1481]: time="2025-01-29T11:04:10.354611009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:04:12.737172 containerd[1481]: time="2025-01-29T11:04:12.737117902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:12.739487 containerd[1481]: time="2025-01-29T11:04:12.739391103Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527127" Jan 29 11:04:12.741710 containerd[1481]: time="2025-01-29T11:04:12.740814694Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:12.745542 containerd[1481]: time="2025-01-29T11:04:12.745430975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:12.746456 containerd[1481]: time="2025-01-29T11:04:12.746401981Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" 
in 2.391747574s" Jan 29 11:04:12.746456 containerd[1481]: time="2025-01-29T11:04:12.746449300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 11:04:12.747265 containerd[1481]: time="2025-01-29T11:04:12.747126356Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:04:14.366616 containerd[1481]: time="2025-01-29T11:04:14.366548065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:14.370519 containerd[1481]: time="2025-01-29T11:04:14.370071589Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481133" Jan 29 11:04:14.373325 containerd[1481]: time="2025-01-29T11:04:14.373262604Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:14.377643 containerd[1481]: time="2025-01-29T11:04:14.377539424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:14.379071 containerd[1481]: time="2025-01-29T11:04:14.378536831Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.631376755s" Jan 29 11:04:14.379071 containerd[1481]: time="2025-01-29T11:04:14.378584909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference 
\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 11:04:14.379236 containerd[1481]: time="2025-01-29T11:04:14.379209489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:04:15.445216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716564566.mount: Deactivated successfully. Jan 29 11:04:15.748384 containerd[1481]: time="2025-01-29T11:04:15.748178176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:15.750557 containerd[1481]: time="2025-01-29T11:04:15.750491822Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364423" Jan 29 11:04:15.751542 containerd[1481]: time="2025-01-29T11:04:15.751504549Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:15.754973 containerd[1481]: time="2025-01-29T11:04:15.754922719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:15.755662 containerd[1481]: time="2025-01-29T11:04:15.755604017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.37635981s" Jan 29 11:04:15.755662 containerd[1481]: time="2025-01-29T11:04:15.755643536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 11:04:15.756434 
containerd[1481]: time="2025-01-29T11:04:15.756215078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:04:16.415642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641589335.mount: Deactivated successfully. Jan 29 11:04:17.473734 containerd[1481]: time="2025-01-29T11:04:17.473612484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:17.475714 containerd[1481]: time="2025-01-29T11:04:17.475623302Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 29 11:04:17.477344 containerd[1481]: time="2025-01-29T11:04:17.477266131Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:17.481735 containerd[1481]: time="2025-01-29T11:04:17.481588598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:17.482925 containerd[1481]: time="2025-01-29T11:04:17.482869559Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.726613122s" Jan 29 11:04:17.482925 containerd[1481]: time="2025-01-29T11:04:17.482913517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 11:04:17.484590 containerd[1481]: time="2025-01-29T11:04:17.484303235Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:04:18.036143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093057589.mount: Deactivated successfully. Jan 29 11:04:18.050042 containerd[1481]: time="2025-01-29T11:04:18.049942311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:18.053840 containerd[1481]: time="2025-01-29T11:04:18.053751796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 29 11:04:18.056456 containerd[1481]: time="2025-01-29T11:04:18.056346718Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:18.064272 containerd[1481]: time="2025-01-29T11:04:18.064169523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:18.065305 containerd[1481]: time="2025-01-29T11:04:18.065037777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.688423ms" Jan 29 11:04:18.065305 containerd[1481]: time="2025-01-29T11:04:18.065084055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:04:18.066560 containerd[1481]: time="2025-01-29T11:04:18.066311458Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:04:18.765354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617307690.mount: 
Deactivated successfully. Jan 29 11:04:18.834196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 11:04:18.844322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:18.990546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:19.001426 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:19.063176 kubelet[2224]: E0129 11:04:19.062996 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:19.065879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:19.066087 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:04:21.873679 containerd[1481]: time="2025-01-29T11:04:21.871805797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:21.874845 containerd[1481]: time="2025-01-29T11:04:21.874786233Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:21.874934 containerd[1481]: time="2025-01-29T11:04:21.874850191Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491" Jan 29 11:04:21.879465 containerd[1481]: time="2025-01-29T11:04:21.879403023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:21.881598 containerd[1481]: time="2025-01-29T11:04:21.881548722Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.815193706s" Jan 29 11:04:21.881598 containerd[1481]: time="2025-01-29T11:04:21.881592201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 11:04:27.090319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:27.101176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:27.146368 systemd[1]: Reloading requested from client PID 2301 ('systemctl') (unit session-7.scope)... Jan 29 11:04:27.146389 systemd[1]: Reloading... 
Jan 29 11:04:27.284673 zram_generator::config[2345]: No configuration found. Jan 29 11:04:27.374068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:04:27.442918 systemd[1]: Reloading finished in 296 ms. Jan 29 11:04:27.529635 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:27.536051 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:04:27.536419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:27.543133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:27.694917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:27.708280 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:04:27.757208 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:04:27.757208 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:04:27.757208 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:04:27.757208 kubelet[2391]: I0129 11:04:27.756986 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:04:28.436899 kubelet[2391]: I0129 11:04:28.436837 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:04:28.436899 kubelet[2391]: I0129 11:04:28.436877 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:04:28.437196 kubelet[2391]: I0129 11:04:28.437159 2391 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:04:28.475900 kubelet[2391]: E0129 11:04:28.475843 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://116.202.15.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:28.478604 kubelet[2391]: I0129 11:04:28.478217 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:04:28.489407 kubelet[2391]: E0129 11:04:28.488573 2391 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:04:28.489407 kubelet[2391]: I0129 11:04:28.488614 2391 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:04:28.494774 kubelet[2391]: I0129 11:04:28.494317 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:04:28.495725 kubelet[2391]: I0129 11:04:28.495673 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:04:28.496950 kubelet[2391]: I0129 11:04:28.495852 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-1-dfe7c46cbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:04:28.497237 kubelet[2391]: I0129 11:04:28.497220 2391 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 11:04:28.499074 kubelet[2391]: I0129 11:04:28.497293 2391 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:04:28.499074 kubelet[2391]: I0129 11:04:28.497859 2391 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:04:28.506238 kubelet[2391]: I0129 11:04:28.505535 2391 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:04:28.506238 kubelet[2391]: I0129 11:04:28.505571 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:04:28.506238 kubelet[2391]: I0129 11:04:28.505595 2391 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:04:28.506238 kubelet[2391]: I0129 11:04:28.505606 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:04:28.512690 kubelet[2391]: W0129 11:04:28.512272 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.15.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:28.512690 kubelet[2391]: E0129 11:04:28.512363 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://116.202.15.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:28.512690 kubelet[2391]: W0129 11:04:28.512434 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://116.202.15.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-dfe7c46cbd&limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:28.512690 kubelet[2391]: E0129 11:04:28.512461 2391 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://116.202.15.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-dfe7c46cbd&limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:28.512690 kubelet[2391]: I0129 11:04:28.512565 2391 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:04:28.514352 kubelet[2391]: I0129 11:04:28.514281 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:04:28.514602 kubelet[2391]: W0129 11:04:28.514575 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:04:28.517390 kubelet[2391]: I0129 11:04:28.516690 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:04:28.517390 kubelet[2391]: I0129 11:04:28.516760 2391 server.go:1287] "Started kubelet" Jan 29 11:04:28.521040 kubelet[2391]: I0129 11:04:28.520981 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:04:28.522184 kubelet[2391]: I0129 11:04:28.521987 2391 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:04:28.525008 kubelet[2391]: I0129 11:04:28.524022 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:04:28.525008 kubelet[2391]: I0129 11:04:28.524422 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:04:28.525008 kubelet[2391]: E0129 11:04:28.524621 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.15.110:6443/api/v1/namespaces/default/events\": dial tcp 116.202.15.110:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4186-1-0-1-dfe7c46cbd.181f250141c1dd18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-1-dfe7c46cbd,UID:ci-4186-1-0-1-dfe7c46cbd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-1-dfe7c46cbd,},FirstTimestamp:2025-01-29 11:04:28.516719896 +0000 UTC m=+0.803290532,LastTimestamp:2025-01-29 11:04:28.516719896 +0000 UTC m=+0.803290532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-1-dfe7c46cbd,}" Jan 29 11:04:28.525008 kubelet[2391]: I0129 11:04:28.525074 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:04:28.525968 kubelet[2391]: I0129 11:04:28.525935 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:04:28.531518 kubelet[2391]: E0129 11:04:28.531181 2391 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" Jan 29 11:04:28.531518 kubelet[2391]: I0129 11:04:28.531220 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:04:28.531518 kubelet[2391]: I0129 11:04:28.531423 2391 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:04:28.533208 kubelet[2391]: I0129 11:04:28.531493 2391 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:04:28.533208 kubelet[2391]: W0129 11:04:28.532523 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://116.202.15.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:28.533208 kubelet[2391]: E0129 11:04:28.532573 2391 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://116.202.15.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:28.534250 kubelet[2391]: E0129 11:04:28.533710 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.15.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-dfe7c46cbd?timeout=10s\": dial tcp 116.202.15.110:6443: connect: connection refused" interval="200ms" Jan 29 11:04:28.534250 kubelet[2391]: E0129 11:04:28.535156 2391 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:04:28.537610 kubelet[2391]: I0129 11:04:28.536004 2391 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:04:28.537610 kubelet[2391]: I0129 11:04:28.536126 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:04:28.538629 kubelet[2391]: I0129 11:04:28.538607 2391 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:04:28.549005 kubelet[2391]: I0129 11:04:28.548941 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:04:28.550159 kubelet[2391]: I0129 11:04:28.550123 2391 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:04:28.550159 kubelet[2391]: I0129 11:04:28.550155 2391 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:04:28.550293 kubelet[2391]: I0129 11:04:28.550191 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:04:28.550293 kubelet[2391]: I0129 11:04:28.550198 2391 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:04:28.550293 kubelet[2391]: E0129 11:04:28.550245 2391 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:04:28.558103 kubelet[2391]: W0129 11:04:28.558023 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://116.202.15.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:28.558103 kubelet[2391]: E0129 11:04:28.558097 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://116.202.15.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:28.572143 kubelet[2391]: I0129 11:04:28.572080 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:04:28.572143 kubelet[2391]: I0129 11:04:28.572108 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:04:28.572394 kubelet[2391]: I0129 11:04:28.572161 2391 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:04:28.577430 kubelet[2391]: I0129 11:04:28.577381 2391 policy_none.go:49] "None policy: Start" Jan 29 11:04:28.577430 kubelet[2391]: I0129 11:04:28.577424 2391 memory_manager.go:186] "Starting memorymanager" policy="None" 
Jan 29 11:04:28.577430 kubelet[2391]: I0129 11:04:28.577439 2391 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:04:28.583878 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:04:28.595499 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:04:28.599608 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:04:28.609481 kubelet[2391]: I0129 11:04:28.609182 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:04:28.609833 kubelet[2391]: I0129 11:04:28.609805 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:04:28.611898 kubelet[2391]: I0129 11:04:28.610597 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:04:28.613771 kubelet[2391]: I0129 11:04:28.612922 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:04:28.615891 kubelet[2391]: E0129 11:04:28.615837 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:04:28.616856 kubelet[2391]: E0129 11:04:28.615914 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-1-dfe7c46cbd\" not found" Jan 29 11:04:28.665881 systemd[1]: Created slice kubepods-burstable-pod1ab45a9571af7eebb2c5f2f55f8143ff.slice - libcontainer container kubepods-burstable-pod1ab45a9571af7eebb2c5f2f55f8143ff.slice. 
Jan 29 11:04:28.685124 kubelet[2391]: E0129 11:04:28.684720 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.690241 systemd[1]: Created slice kubepods-burstable-pod169cc8b857019d81c44b7ae543477fd7.slice - libcontainer container kubepods-burstable-pod169cc8b857019d81c44b7ae543477fd7.slice. Jan 29 11:04:28.700983 kubelet[2391]: E0129 11:04:28.700613 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.705081 systemd[1]: Created slice kubepods-burstable-pod10a439c0ae8de3a2bfd6f92b4c7ac182.slice - libcontainer container kubepods-burstable-pod10a439c0ae8de3a2bfd6f92b4c7ac182.slice. Jan 29 11:04:28.707579 kubelet[2391]: E0129 11:04:28.707482 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.716351 kubelet[2391]: I0129 11:04:28.716282 2391 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.716984 kubelet[2391]: E0129 11:04:28.716946 2391 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://116.202.15.110:6443/api/v1/nodes\": dial tcp 116.202.15.110:6443: connect: connection refused" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.733974 kubelet[2391]: I0129 11:04:28.733823 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.733974 
kubelet[2391]: I0129 11:04:28.733871 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.733974 kubelet[2391]: I0129 11:04:28.733893 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.733974 kubelet[2391]: I0129 11:04:28.733909 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.733974 kubelet[2391]: I0129 11:04:28.733927 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.734551 kubelet[2391]: I0129 11:04:28.733958 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.734551 kubelet[2391]: I0129 11:04:28.733975 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.734551 kubelet[2391]: I0129 11:04:28.733992 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.734551 kubelet[2391]: I0129 11:04:28.734011 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10a439c0ae8de3a2bfd6f92b4c7ac182-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"10a439c0ae8de3a2bfd6f92b4c7ac182\") " pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.734551 kubelet[2391]: E0129 11:04:28.734484 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.15.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-dfe7c46cbd?timeout=10s\": dial tcp 116.202.15.110:6443: connect: connection refused" interval="400ms" Jan 29 11:04:28.919543 kubelet[2391]: I0129 11:04:28.919517 2391 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.919987 kubelet[2391]: E0129 
11:04:28.919885 2391 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://116.202.15.110:6443/api/v1/nodes\": dial tcp 116.202.15.110:6443: connect: connection refused" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:28.987507 containerd[1481]: time="2025-01-29T11:04:28.987356231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-1-dfe7c46cbd,Uid:1ab45a9571af7eebb2c5f2f55f8143ff,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:29.002222 containerd[1481]: time="2025-01-29T11:04:29.002156911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd,Uid:169cc8b857019d81c44b7ae543477fd7,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:29.008847 containerd[1481]: time="2025-01-29T11:04:29.008775393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-1-dfe7c46cbd,Uid:10a439c0ae8de3a2bfd6f92b4c7ac182,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:29.136103 kubelet[2391]: E0129 11:04:29.135963 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.15.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-dfe7c46cbd?timeout=10s\": dial tcp 116.202.15.110:6443: connect: connection refused" interval="800ms" Jan 29 11:04:29.324060 kubelet[2391]: I0129 11:04:29.323335 2391 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:29.324060 kubelet[2391]: E0129 11:04:29.323784 2391 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://116.202.15.110:6443/api/v1/nodes\": dial tcp 116.202.15.110:6443: connect: connection refused" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:29.509817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220042032.mount: Deactivated successfully. 
Jan 29 11:04:29.519471 containerd[1481]: time="2025-01-29T11:04:29.519379720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:04:29.523917 containerd[1481]: time="2025-01-29T11:04:29.523777774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 11:04:29.528207 containerd[1481]: time="2025-01-29T11:04:29.527628123Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:04:29.532225 containerd[1481]: time="2025-01-29T11:04:29.531733904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:04:29.535436 containerd[1481]: time="2025-01-29T11:04:29.535176862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:04:29.537158 containerd[1481]: time="2025-01-29T11:04:29.537106936Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:04:29.537738 containerd[1481]: time="2025-01-29T11:04:29.537548046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:04:29.538965 containerd[1481]: time="2025-01-29T11:04:29.538849375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:04:29.541584 
containerd[1481]: time="2025-01-29T11:04:29.540495175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.626385ms" Jan 29 11:04:29.541756 containerd[1481]: time="2025-01-29T11:04:29.541589909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.12084ms" Jan 29 11:04:29.546826 containerd[1481]: time="2025-01-29T11:04:29.546376795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.101606ms" Jan 29 11:04:29.641304 kubelet[2391]: W0129 11:04:29.641102 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://116.202.15.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-dfe7c46cbd&limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:29.641304 kubelet[2391]: E0129 11:04:29.641218 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://116.202.15.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-dfe7c46cbd&limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:29.655913 
kubelet[2391]: W0129 11:04:29.655778 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://116.202.15.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:29.655913 kubelet[2391]: E0129 11:04:29.655856 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://116.202.15.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:29.683737 containerd[1481]: time="2025-01-29T11:04:29.682368547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:29.683737 containerd[1481]: time="2025-01-29T11:04:29.683340204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:29.683737 containerd[1481]: time="2025-01-29T11:04:29.683365523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.685134 containerd[1481]: time="2025-01-29T11:04:29.685076322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.686661 containerd[1481]: time="2025-01-29T11:04:29.686375291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:29.687244 containerd[1481]: time="2025-01-29T11:04:29.686912239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:29.687244 containerd[1481]: time="2025-01-29T11:04:29.686933358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.687387 containerd[1481]: time="2025-01-29T11:04:29.686949838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:29.687387 containerd[1481]: time="2025-01-29T11:04:29.686993877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:29.687387 containerd[1481]: time="2025-01-29T11:04:29.687004476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.687387 containerd[1481]: time="2025-01-29T11:04:29.687081955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.687507 containerd[1481]: time="2025-01-29T11:04:29.687424266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:29.718095 systemd[1]: Started cri-containerd-e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754.scope - libcontainer container e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754. Jan 29 11:04:29.730967 systemd[1]: Started cri-containerd-3d14f1b30989935b949d5b71f150ba5b4006c4338eb6bb878364eee0f022439d.scope - libcontainer container 3d14f1b30989935b949d5b71f150ba5b4006c4338eb6bb878364eee0f022439d. Jan 29 11:04:29.732770 systemd[1]: Started cri-containerd-7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409.scope - libcontainer container 7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409. 
Jan 29 11:04:29.792219 containerd[1481]: time="2025-01-29T11:04:29.791862372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-1-dfe7c46cbd,Uid:1ab45a9571af7eebb2c5f2f55f8143ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d14f1b30989935b949d5b71f150ba5b4006c4338eb6bb878364eee0f022439d\"" Jan 29 11:04:29.793234 containerd[1481]: time="2025-01-29T11:04:29.793087463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-1-dfe7c46cbd,Uid:10a439c0ae8de3a2bfd6f92b4c7ac182,Namespace:kube-system,Attempt:0,} returns sandbox id \"e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754\"" Jan 29 11:04:29.797911 containerd[1481]: time="2025-01-29T11:04:29.797862109Z" level=info msg="CreateContainer within sandbox \"e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:04:29.800713 containerd[1481]: time="2025-01-29T11:04:29.798022305Z" level=info msg="CreateContainer within sandbox \"3d14f1b30989935b949d5b71f150ba5b4006c4338eb6bb878364eee0f022439d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:04:29.806934 containerd[1481]: time="2025-01-29T11:04:29.805937116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd,Uid:169cc8b857019d81c44b7ae543477fd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409\"" Jan 29 11:04:29.811894 containerd[1481]: time="2025-01-29T11:04:29.811854095Z" level=info msg="CreateContainer within sandbox \"7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:04:29.833464 containerd[1481]: time="2025-01-29T11:04:29.833173906Z" level=info msg="CreateContainer within sandbox 
\"3d14f1b30989935b949d5b71f150ba5b4006c4338eb6bb878364eee0f022439d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a1fa199bfa8fcb997df7ded348bf67daec50de2397667a618317cdb5efff0f8\"" Jan 29 11:04:29.836039 containerd[1481]: time="2025-01-29T11:04:29.834812347Z" level=info msg="StartContainer for \"7a1fa199bfa8fcb997df7ded348bf67daec50de2397667a618317cdb5efff0f8\"" Jan 29 11:04:29.836689 containerd[1481]: time="2025-01-29T11:04:29.836633663Z" level=info msg="CreateContainer within sandbox \"e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc\"" Jan 29 11:04:29.839230 containerd[1481]: time="2025-01-29T11:04:29.839198322Z" level=info msg="StartContainer for \"e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc\"" Jan 29 11:04:29.844091 containerd[1481]: time="2025-01-29T11:04:29.844045566Z" level=info msg="CreateContainer within sandbox \"7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5\"" Jan 29 11:04:29.845556 containerd[1481]: time="2025-01-29T11:04:29.845512731Z" level=info msg="StartContainer for \"acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5\"" Jan 29 11:04:29.869176 systemd[1]: Started cri-containerd-7a1fa199bfa8fcb997df7ded348bf67daec50de2397667a618317cdb5efff0f8.scope - libcontainer container 7a1fa199bfa8fcb997df7ded348bf67daec50de2397667a618317cdb5efff0f8. Jan 29 11:04:29.892879 systemd[1]: Started cri-containerd-e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc.scope - libcontainer container e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc. 
Jan 29 11:04:29.906776 systemd[1]: Started cri-containerd-acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5.scope - libcontainer container acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5. Jan 29 11:04:29.930213 kubelet[2391]: W0129 11:04:29.930143 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.15.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:29.931695 kubelet[2391]: E0129 11:04:29.930997 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://116.202.15.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:29.937693 kubelet[2391]: E0129 11:04:29.936878 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.15.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-dfe7c46cbd?timeout=10s\": dial tcp 116.202.15.110:6443: connect: connection refused" interval="1.6s" Jan 29 11:04:29.952884 containerd[1481]: time="2025-01-29T11:04:29.952789369Z" level=info msg="StartContainer for \"7a1fa199bfa8fcb997df7ded348bf67daec50de2397667a618317cdb5efff0f8\" returns successfully" Jan 29 11:04:29.978092 containerd[1481]: time="2025-01-29T11:04:29.977822091Z" level=info msg="StartContainer for \"e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc\" returns successfully" Jan 29 11:04:29.978719 containerd[1481]: time="2025-01-29T11:04:29.977817731Z" level=info msg="StartContainer for \"acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5\" returns successfully" Jan 29 11:04:30.097810 kubelet[2391]: W0129 11:04:30.097620 2391 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://116.202.15.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.15.110:6443: connect: connection refused Jan 29 11:04:30.097810 kubelet[2391]: E0129 11:04:30.097761 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://116.202.15.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.15.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:30.127982 kubelet[2391]: I0129 11:04:30.127944 2391 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:30.586676 kubelet[2391]: E0129 11:04:30.586238 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:30.599033 kubelet[2391]: E0129 11:04:30.598992 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:30.605433 kubelet[2391]: E0129 11:04:30.605280 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:31.611235 kubelet[2391]: E0129 11:04:31.611164 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:31.611815 kubelet[2391]: E0129 11:04:31.611539 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" 
node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.523168 kubelet[2391]: I0129 11:04:32.523103 2391 apiserver.go:52] "Watching apiserver" Jan 29 11:04:32.591985 kubelet[2391]: E0129 11:04:32.591933 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.614541 kubelet[2391]: E0129 11:04:32.614483 2391 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.622185 kubelet[2391]: E0129 11:04:32.622125 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-1-dfe7c46cbd\" not found" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.632679 kubelet[2391]: I0129 11:04:32.632347 2391 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:04:32.739908 kubelet[2391]: I0129 11:04:32.739518 2391 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.835410 kubelet[2391]: I0129 11:04:32.833111 2391 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.854249 kubelet[2391]: E0129 11:04:32.854168 2391 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.854249 kubelet[2391]: I0129 11:04:32.854233 2391 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.860406 kubelet[2391]: E0129 11:04:32.860333 2391 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.860406 kubelet[2391]: I0129 11:04:32.860391 2391 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:32.864035 kubelet[2391]: E0129 11:04:32.863977 2391 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186-1-0-1-dfe7c46cbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:33.554293 kubelet[2391]: I0129 11:04:33.553902 2391 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:35.177301 systemd[1]: Reloading requested from client PID 2660 ('systemctl') (unit session-7.scope)... Jan 29 11:04:35.177862 systemd[1]: Reloading... Jan 29 11:04:35.287688 zram_generator::config[2703]: No configuration found. Jan 29 11:04:35.466872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:04:35.598744 systemd[1]: Reloading finished in 420 ms. Jan 29 11:04:35.665707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:35.680023 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:04:35.680352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:35.680438 systemd[1]: kubelet.service: Consumed 1.342s CPU time, 120.3M memory peak, 0B memory swap peak. Jan 29 11:04:35.691569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:35.899232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:04:35.904250 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:04:35.966690 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:04:35.966690 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:04:35.966690 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:04:35.966690 kubelet[2745]: I0129 11:04:35.965469 2745 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:04:35.994899 kubelet[2745]: I0129 11:04:35.994810 2745 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:04:35.994899 kubelet[2745]: I0129 11:04:35.994885 2745 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:04:35.995611 kubelet[2745]: I0129 11:04:35.995560 2745 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:04:35.999291 kubelet[2745]: I0129 11:04:35.999236 2745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:04:36.008423 kubelet[2745]: I0129 11:04:36.007506 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:04:36.020689 kubelet[2745]: E0129 11:04:36.020591 2745 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:04:36.020689 kubelet[2745]: I0129 11:04:36.020677 2745 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:04:36.025788 kubelet[2745]: I0129 11:04:36.025723 2745 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:04:36.026294 kubelet[2745]: I0129 11:04:36.026012 2745 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:04:36.026742 kubelet[2745]: I0129 11:04:36.026053 2745 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4186-1-0-1-dfe7c46cbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:04:36.026742 kubelet[2745]: I0129 11:04:36.026578 2745 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:04:36.026742 kubelet[2745]: I0129 11:04:36.026593 2745 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:04:36.026742 kubelet[2745]: I0129 11:04:36.026704 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:04:36.027062 kubelet[2745]: I0129 11:04:36.026962 2745 
kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:04:36.027062 kubelet[2745]: I0129 11:04:36.026975 2745 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:04:36.036784 kubelet[2745]: I0129 11:04:36.027003 2745 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:04:36.036784 kubelet[2745]: I0129 11:04:36.036042 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:04:36.042321 kubelet[2745]: I0129 11:04:36.041236 2745 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:04:36.042321 kubelet[2745]: I0129 11:04:36.042045 2745 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:04:36.044212 kubelet[2745]: I0129 11:04:36.043800 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:04:36.044212 kubelet[2745]: I0129 11:04:36.043870 2745 server.go:1287] "Started kubelet" Jan 29 11:04:36.050886 kubelet[2745]: I0129 11:04:36.050433 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:04:36.059002 kubelet[2745]: I0129 11:04:36.058481 2745 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:04:36.060605 kubelet[2745]: I0129 11:04:36.060297 2745 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:04:36.062708 kubelet[2745]: I0129 11:04:36.061566 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:04:36.062708 kubelet[2745]: I0129 11:04:36.061962 2745 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:04:36.062962 kubelet[2745]: I0129 11:04:36.062791 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:04:36.066283 kubelet[2745]: 
I0129 11:04:36.065330 2745 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:04:36.066283 kubelet[2745]: E0129 11:04:36.065820 2745 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186-1-0-1-dfe7c46cbd\" not found" Jan 29 11:04:36.069890 kubelet[2745]: I0129 11:04:36.069825 2745 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:04:36.070094 kubelet[2745]: I0129 11:04:36.070063 2745 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:04:36.073752 kubelet[2745]: I0129 11:04:36.073421 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:04:36.076690 kubelet[2745]: I0129 11:04:36.075457 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:04:36.076690 kubelet[2745]: I0129 11:04:36.075506 2745 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:04:36.076690 kubelet[2745]: I0129 11:04:36.075534 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 11:04:36.076690 kubelet[2745]: I0129 11:04:36.075540 2745 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:04:36.076690 kubelet[2745]: E0129 11:04:36.075602 2745 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:04:36.090253 kubelet[2745]: I0129 11:04:36.089799 2745 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:04:36.090253 kubelet[2745]: I0129 11:04:36.090056 2745 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:04:36.096672 kubelet[2745]: I0129 11:04:36.093743 2745 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:04:36.178893 kubelet[2745]: E0129 11:04:36.176537 2745 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:04:36.190975 sudo[2776]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:04:36.191561 sudo[2776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:04:36.205530 kubelet[2745]: I0129 11:04:36.205496 2745 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:04:36.205753 kubelet[2745]: I0129 11:04:36.205737 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.205852 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.206088 2745 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.206150 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.206181 2745 policy_none.go:49] "None 
policy: Start" Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.206193 2745 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:04:36.206800 kubelet[2745]: I0129 11:04:36.206205 2745 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:04:36.207225 kubelet[2745]: I0129 11:04:36.207205 2745 state_mem.go:75] "Updated machine memory state" Jan 29 11:04:36.213949 kubelet[2745]: I0129 11:04:36.213814 2745 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:04:36.214833 kubelet[2745]: I0129 11:04:36.214807 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:04:36.215569 kubelet[2745]: I0129 11:04:36.214960 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:04:36.215569 kubelet[2745]: I0129 11:04:36.215450 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:04:36.223772 kubelet[2745]: E0129 11:04:36.222597 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:04:36.350616 kubelet[2745]: I0129 11:04:36.349180 2745 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.371033 kubelet[2745]: I0129 11:04:36.369792 2745 kubelet_node_status.go:125] "Node was previously registered" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.372577 kubelet[2745]: I0129 11:04:36.371466 2745 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.380852 kubelet[2745]: I0129 11:04:36.380447 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.382836 kubelet[2745]: I0129 11:04:36.380525 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.383465 kubelet[2745]: I0129 11:04:36.382181 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.430809 kubelet[2745]: E0129 11:04:36.430018 2745 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.472804 kubelet[2745]: I0129 11:04:36.472436 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.472804 kubelet[2745]: I0129 11:04:36.472493 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-kubeconfig\") 
pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.472804 kubelet[2745]: I0129 11:04:36.472516 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.472804 kubelet[2745]: I0129 11:04:36.472549 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.472804 kubelet[2745]: I0129 11:04:36.472575 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.474314 kubelet[2745]: I0129 11:04:36.473913 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.474314 kubelet[2745]: I0129 11:04:36.474002 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/169cc8b857019d81c44b7ae543477fd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"169cc8b857019d81c44b7ae543477fd7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.474314 kubelet[2745]: I0129 11:04:36.474032 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10a439c0ae8de3a2bfd6f92b4c7ac182-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"10a439c0ae8de3a2bfd6f92b4c7ac182\") " pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.474314 kubelet[2745]: I0129 11:04:36.474210 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ab45a9571af7eebb2c5f2f55f8143ff-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-1-dfe7c46cbd\" (UID: \"1ab45a9571af7eebb2c5f2f55f8143ff\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:36.770818 sudo[2776]: pam_unix(sudo:session): session closed for user root Jan 29 11:04:37.037966 kubelet[2745]: I0129 11:04:37.037638 2745 apiserver.go:52] "Watching apiserver" Jan 29 11:04:37.070071 kubelet[2745]: I0129 11:04:37.070010 2745 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:04:37.130752 kubelet[2745]: I0129 11:04:37.129493 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:37.151767 kubelet[2745]: E0129 11:04:37.151174 2745 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186-1-0-1-dfe7c46cbd\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" Jan 29 11:04:37.195729 kubelet[2745]: 
I0129 11:04:37.194832 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-1-dfe7c46cbd" podStartSLOduration=1.194801894 podStartE2EDuration="1.194801894s" podCreationTimestamp="2025-01-29 11:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:04:37.163549657 +0000 UTC m=+1.253266679" watchObservedRunningTime="2025-01-29 11:04:37.194801894 +0000 UTC m=+1.284518916" Jan 29 11:04:37.216991 kubelet[2745]: I0129 11:04:37.216383 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-1-dfe7c46cbd" podStartSLOduration=4.216334412 podStartE2EDuration="4.216334412s" podCreationTimestamp="2025-01-29 11:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:04:37.197834552 +0000 UTC m=+1.287551574" watchObservedRunningTime="2025-01-29 11:04:37.216334412 +0000 UTC m=+1.306051474" Jan 29 11:04:37.218789 kubelet[2745]: I0129 11:04:37.217771 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-1-dfe7c46cbd" podStartSLOduration=1.217718703 podStartE2EDuration="1.217718703s" podCreationTimestamp="2025-01-29 11:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:04:37.216303012 +0000 UTC m=+1.306020034" watchObservedRunningTime="2025-01-29 11:04:37.217718703 +0000 UTC m=+1.307436085" Jan 29 11:04:39.083305 sudo[1858]: pam_unix(sudo:session): session closed for user root Jan 29 11:04:39.246449 sshd[1857]: Connection closed by 147.75.109.163 port 55694 Jan 29 11:04:39.247537 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:39.255708 systemd[1]: 
sshd@6-116.202.15.110:22-147.75.109.163:55694.service: Deactivated successfully. Jan 29 11:04:39.261936 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:04:39.263054 systemd[1]: session-7.scope: Consumed 7.820s CPU time, 152.1M memory peak, 0B memory swap peak. Jan 29 11:04:39.267108 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:04:39.268633 systemd-logind[1459]: Removed session 7. Jan 29 11:04:40.048822 kubelet[2745]: I0129 11:04:40.048784 2745 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:04:40.050104 containerd[1481]: time="2025-01-29T11:04:40.050007878Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:04:40.050523 kubelet[2745]: I0129 11:04:40.050477 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:04:40.763054 systemd[1]: Created slice kubepods-besteffort-pod32748024_eaf2_4d77_b607_ace7efb39a0a.slice - libcontainer container kubepods-besteffort-pod32748024_eaf2_4d77_b607_ace7efb39a0a.slice. 
Jan 29 11:04:40.774882 kubelet[2745]: I0129 11:04:40.774627 2745 status_manager.go:890] "Failed to get status for pod" podUID="32748024-eaf2-4d77-b607-ace7efb39a0a" pod="kube-system/kube-proxy-j4jfv" err="pods \"kube-proxy-j4jfv\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" Jan 29 11:04:40.774882 kubelet[2745]: W0129 11:04:40.774750 2745 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:04:40.774882 kubelet[2745]: E0129 11:04:40.774781 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:04:40.774882 kubelet[2745]: W0129 11:04:40.774820 2745 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:04:40.774882 kubelet[2745]: E0129 11:04:40.774832 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User 
\"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:04:40.804786 systemd[1]: Created slice kubepods-burstable-podecddc1ca_2734_412e_a9d4_a87cd0bff1d9.slice - libcontainer container kubepods-burstable-podecddc1ca_2734_412e_a9d4_a87cd0bff1d9.slice. Jan 29 11:04:40.814352 kubelet[2745]: I0129 11:04:40.814280 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-bpf-maps\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814352 kubelet[2745]: I0129 11:04:40.814334 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cni-path\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814352 kubelet[2745]: I0129 11:04:40.814354 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-etc-cni-netd\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814371 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-lib-modules\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814394 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-run\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814407 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-clustermesh-secrets\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814423 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hostproc\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814438 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-cgroup\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814600 kubelet[2745]: I0129 11:04:40.814455 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-xtables-lock\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.814808 kubelet[2745]: I0129 11:04:40.814470 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-config-path\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.914751 kubelet[2745]: I0129 11:04:40.914694 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqxkh\" (UniqueName: \"kubernetes.io/projected/32748024-eaf2-4d77-b607-ace7efb39a0a-kube-api-access-bqxkh\") pod \"kube-proxy-j4jfv\" (UID: \"32748024-eaf2-4d77-b607-ace7efb39a0a\") " pod="kube-system/kube-proxy-j4jfv" Jan 29 11:04:40.917844 kubelet[2745]: I0129 11:04:40.917769 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-net\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.918280 kubelet[2745]: I0129 11:04:40.918243 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32748024-eaf2-4d77-b607-ace7efb39a0a-lib-modules\") pod \"kube-proxy-j4jfv\" (UID: \"32748024-eaf2-4d77-b607-ace7efb39a0a\") " pod="kube-system/kube-proxy-j4jfv" Jan 29 11:04:40.918415 kubelet[2745]: I0129 11:04:40.918401 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32748024-eaf2-4d77-b607-ace7efb39a0a-kube-proxy\") pod \"kube-proxy-j4jfv\" (UID: \"32748024-eaf2-4d77-b607-ace7efb39a0a\") " pod="kube-system/kube-proxy-j4jfv" Jan 29 11:04:40.918718 kubelet[2745]: I0129 11:04:40.918696 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hubble-tls\") pod 
\"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.919197 kubelet[2745]: I0129 11:04:40.919163 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-kernel\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:40.919346 kubelet[2745]: I0129 11:04:40.919311 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32748024-eaf2-4d77-b607-ace7efb39a0a-xtables-lock\") pod \"kube-proxy-j4jfv\" (UID: \"32748024-eaf2-4d77-b607-ace7efb39a0a\") " pod="kube-system/kube-proxy-j4jfv" Jan 29 11:04:40.919638 kubelet[2745]: I0129 11:04:40.919529 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2clj\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-kube-api-access-t2clj\") pod \"cilium-zwmtv\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") " pod="kube-system/cilium-zwmtv" Jan 29 11:04:41.196606 systemd[1]: Created slice kubepods-besteffort-pod728b6ad9_4b71_475f_a661_81e8f9bc8501.slice - libcontainer container kubepods-besteffort-pod728b6ad9_4b71_475f_a661_81e8f9bc8501.slice. 
Jan 29 11:04:41.222397 kubelet[2745]: I0129 11:04:41.222265 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzb5\" (UniqueName: \"kubernetes.io/projected/728b6ad9-4b71-475f-a661-81e8f9bc8501-kube-api-access-hwzb5\") pod \"cilium-operator-6c4d7847fc-t2wsh\" (UID: \"728b6ad9-4b71-475f-a661-81e8f9bc8501\") " pod="kube-system/cilium-operator-6c4d7847fc-t2wsh" Jan 29 11:04:41.222397 kubelet[2745]: I0129 11:04:41.222382 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/728b6ad9-4b71-475f-a661-81e8f9bc8501-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-t2wsh\" (UID: \"728b6ad9-4b71-475f-a661-81e8f9bc8501\") " pod="kube-system/cilium-operator-6c4d7847fc-t2wsh" Jan 29 11:04:42.019487 containerd[1481]: time="2025-01-29T11:04:42.019391869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwmtv,Uid:ecddc1ca-2734-412e-a9d4-a87cd0bff1d9,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:42.056087 containerd[1481]: time="2025-01-29T11:04:42.055894380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:42.056454 containerd[1481]: time="2025-01-29T11:04:42.056091696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:42.056454 containerd[1481]: time="2025-01-29T11:04:42.056120256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.057277 containerd[1481]: time="2025-01-29T11:04:42.056905961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.080931 systemd[1]: run-containerd-runc-k8s.io-112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897-runc.lVp2jd.mount: Deactivated successfully. Jan 29 11:04:42.098095 systemd[1]: Started cri-containerd-112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897.scope - libcontainer container 112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897. Jan 29 11:04:42.105640 containerd[1481]: time="2025-01-29T11:04:42.105148691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t2wsh,Uid:728b6ad9-4b71-475f-a661-81e8f9bc8501,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:42.160264 containerd[1481]: time="2025-01-29T11:04:42.160192892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwmtv,Uid:ecddc1ca-2734-412e-a9d4-a87cd0bff1d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\"" Jan 29 11:04:42.169913 containerd[1481]: time="2025-01-29T11:04:42.169794871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:04:42.172344 containerd[1481]: time="2025-01-29T11:04:42.168933087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:42.172344 containerd[1481]: time="2025-01-29T11:04:42.169035965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:42.172344 containerd[1481]: time="2025-01-29T11:04:42.169048725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.172344 containerd[1481]: time="2025-01-29T11:04:42.169156483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.199136 systemd[1]: Started cri-containerd-bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7.scope - libcontainer container bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7. Jan 29 11:04:42.255040 containerd[1481]: time="2025-01-29T11:04:42.254843746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t2wsh,Uid:728b6ad9-4b71-475f-a661-81e8f9bc8501,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\"" Jan 29 11:04:42.277889 containerd[1481]: time="2025-01-29T11:04:42.276086225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4jfv,Uid:32748024-eaf2-4d77-b607-ace7efb39a0a,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:42.316987 containerd[1481]: time="2025-01-29T11:04:42.316800777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:04:42.316987 containerd[1481]: time="2025-01-29T11:04:42.316913095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:04:42.316987 containerd[1481]: time="2025-01-29T11:04:42.316942094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.317505 containerd[1481]: time="2025-01-29T11:04:42.317141651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:42.346026 systemd[1]: Started cri-containerd-3e30ef3ba9fa1c345a4f85aa0567dd3ab86d07f22f0ce64332fcbc7aff479faa.scope - libcontainer container 3e30ef3ba9fa1c345a4f85aa0567dd3ab86d07f22f0ce64332fcbc7aff479faa. 
Jan 29 11:04:42.382062 containerd[1481]: time="2025-01-29T11:04:42.381969187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4jfv,Uid:32748024-eaf2-4d77-b607-ace7efb39a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e30ef3ba9fa1c345a4f85aa0567dd3ab86d07f22f0ce64332fcbc7aff479faa\"" Jan 29 11:04:42.392727 containerd[1481]: time="2025-01-29T11:04:42.391810762Z" level=info msg="CreateContainer within sandbox \"3e30ef3ba9fa1c345a4f85aa0567dd3ab86d07f22f0ce64332fcbc7aff479faa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:04:42.434826 containerd[1481]: time="2025-01-29T11:04:42.434723472Z" level=info msg="CreateContainer within sandbox \"3e30ef3ba9fa1c345a4f85aa0567dd3ab86d07f22f0ce64332fcbc7aff479faa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3a9486adda113a953d08ef110ef90c8693edd184688e65942ec034a75b24489\"" Jan 29 11:04:42.436118 containerd[1481]: time="2025-01-29T11:04:42.436064567Z" level=info msg="StartContainer for \"e3a9486adda113a953d08ef110ef90c8693edd184688e65942ec034a75b24489\"" Jan 29 11:04:42.480029 systemd[1]: Started cri-containerd-e3a9486adda113a953d08ef110ef90c8693edd184688e65942ec034a75b24489.scope - libcontainer container e3a9486adda113a953d08ef110ef90c8693edd184688e65942ec034a75b24489. 
Jan 29 11:04:42.535333 containerd[1481]: time="2025-01-29T11:04:42.532504507Z" level=info msg="StartContainer for \"e3a9486adda113a953d08ef110ef90c8693edd184688e65942ec034a75b24489\" returns successfully" Jan 29 11:04:43.199921 kubelet[2745]: I0129 11:04:43.199832 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4jfv" podStartSLOduration=3.199799776 podStartE2EDuration="3.199799776s" podCreationTimestamp="2025-01-29 11:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:04:43.181633953 +0000 UTC m=+7.271350975" watchObservedRunningTime="2025-01-29 11:04:43.199799776 +0000 UTC m=+7.289516798" Jan 29 11:04:47.387319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132954997.mount: Deactivated successfully. Jan 29 11:04:48.773357 containerd[1481]: time="2025-01-29T11:04:48.772303848Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:48.773992 containerd[1481]: time="2025-01-29T11:04:48.773944020Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:04:48.775763 containerd[1481]: time="2025-01-29T11:04:48.775726069Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:04:48.777481 containerd[1481]: time="2025-01-29T11:04:48.777429120Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.60756341s" Jan 29 11:04:48.777481 containerd[1481]: time="2025-01-29T11:04:48.777473719Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:04:48.780739 containerd[1481]: time="2025-01-29T11:04:48.779954277Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:04:48.783181 containerd[1481]: time="2025-01-29T11:04:48.782906906Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:04:48.805502 containerd[1481]: time="2025-01-29T11:04:48.805440359Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\"" Jan 29 11:04:48.806707 containerd[1481]: time="2025-01-29T11:04:48.806388383Z" level=info msg="StartContainer for \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\"" Jan 29 11:04:48.841888 systemd[1]: Started cri-containerd-909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65.scope - libcontainer container 909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65. 
Jan 29 11:04:48.874584 containerd[1481]: time="2025-01-29T11:04:48.874380736Z" level=info msg="StartContainer for \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\" returns successfully" Jan 29 11:04:48.894203 systemd[1]: cri-containerd-909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65.scope: Deactivated successfully. Jan 29 11:04:49.126478 containerd[1481]: time="2025-01-29T11:04:49.126247483Z" level=info msg="shim disconnected" id=909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65 namespace=k8s.io Jan 29 11:04:49.126478 containerd[1481]: time="2025-01-29T11:04:49.126311202Z" level=warning msg="cleaning up after shim disconnected" id=909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65 namespace=k8s.io Jan 29 11:04:49.126478 containerd[1481]: time="2025-01-29T11:04:49.126319682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:04:49.191628 containerd[1481]: time="2025-01-29T11:04:49.191584738Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:04:49.212961 containerd[1481]: time="2025-01-29T11:04:49.212883578Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\"" Jan 29 11:04:49.214059 containerd[1481]: time="2025-01-29T11:04:49.213989079Z" level=info msg="StartContainer for \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\"" Jan 29 11:04:49.248907 systemd[1]: Started cri-containerd-52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af.scope - libcontainer container 52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af. 
Jan 29 11:04:49.279992 containerd[1481]: time="2025-01-29T11:04:49.279773286Z" level=info msg="StartContainer for \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\" returns successfully" Jan 29 11:04:49.304168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:04:49.304864 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:04:49.304950 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:04:49.314587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:04:49.315077 systemd[1]: cri-containerd-52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af.scope: Deactivated successfully. Jan 29 11:04:49.339692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:04:49.351676 containerd[1481]: time="2025-01-29T11:04:49.351343836Z" level=info msg="shim disconnected" id=52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af namespace=k8s.io Jan 29 11:04:49.351676 containerd[1481]: time="2025-01-29T11:04:49.351455714Z" level=warning msg="cleaning up after shim disconnected" id=52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af namespace=k8s.io Jan 29 11:04:49.351676 containerd[1481]: time="2025-01-29T11:04:49.351492833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:04:49.796808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65-rootfs.mount: Deactivated successfully. 
Jan 29 11:04:50.202243 containerd[1481]: time="2025-01-29T11:04:50.200583840Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:04:50.238291 containerd[1481]: time="2025-01-29T11:04:50.238110414Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\"" Jan 29 11:04:50.238971 containerd[1481]: time="2025-01-29T11:04:50.238944641Z" level=info msg="StartContainer for \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\"" Jan 29 11:04:50.278474 systemd[1]: Started cri-containerd-b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6.scope - libcontainer container b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6. Jan 29 11:04:50.331412 containerd[1481]: time="2025-01-29T11:04:50.331243502Z" level=info msg="StartContainer for \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\" returns successfully" Jan 29 11:04:50.334567 systemd[1]: cri-containerd-b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6.scope: Deactivated successfully. Jan 29 11:04:50.360543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6-rootfs.mount: Deactivated successfully. 
Jan 29 11:04:50.369313 containerd[1481]: time="2025-01-29T11:04:50.369211629Z" level=info msg="shim disconnected" id=b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6 namespace=k8s.io Jan 29 11:04:50.369313 containerd[1481]: time="2025-01-29T11:04:50.369284508Z" level=warning msg="cleaning up after shim disconnected" id=b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6 namespace=k8s.io Jan 29 11:04:50.369844 containerd[1481]: time="2025-01-29T11:04:50.369293908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:04:50.383735 containerd[1481]: time="2025-01-29T11:04:50.383619189Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:04:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:04:51.207752 containerd[1481]: time="2025-01-29T11:04:51.207677742Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:04:51.242533 containerd[1481]: time="2025-01-29T11:04:51.242079136Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\"" Jan 29 11:04:51.244100 containerd[1481]: time="2025-01-29T11:04:51.243606791Z" level=info msg="StartContainer for \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\"" Jan 29 11:04:51.280909 systemd[1]: Started cri-containerd-4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d.scope - libcontainer container 4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d. 
Jan 29 11:04:51.310071 systemd[1]: cri-containerd-4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d.scope: Deactivated successfully. Jan 29 11:04:51.314799 containerd[1481]: time="2025-01-29T11:04:51.314277350Z" level=info msg="StartContainer for \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\" returns successfully" Jan 29 11:04:51.335736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d-rootfs.mount: Deactivated successfully. Jan 29 11:04:51.342699 containerd[1481]: time="2025-01-29T11:04:51.342599885Z" level=info msg="shim disconnected" id=4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d namespace=k8s.io Jan 29 11:04:51.342699 containerd[1481]: time="2025-01-29T11:04:51.342694003Z" level=warning msg="cleaning up after shim disconnected" id=4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d namespace=k8s.io Jan 29 11:04:51.342699 containerd[1481]: time="2025-01-29T11:04:51.342703603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:04:52.220935 containerd[1481]: time="2025-01-29T11:04:52.220696587Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:04:52.244283 containerd[1481]: time="2025-01-29T11:04:52.244131967Z" level=info msg="CreateContainer within sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\"" Jan 29 11:04:52.247386 containerd[1481]: time="2025-01-29T11:04:52.244884595Z" level=info msg="StartContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\"" Jan 29 11:04:52.284008 systemd[1]: Started 
cri-containerd-f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622.scope - libcontainer container f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622. Jan 29 11:04:52.317005 containerd[1481]: time="2025-01-29T11:04:52.316449635Z" level=info msg="StartContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" returns successfully" Jan 29 11:04:52.417413 kubelet[2745]: I0129 11:04:52.417380 2745 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:04:52.485695 systemd[1]: Created slice kubepods-burstable-pod70f7b9e3_294a_4112_b51e_d828e1594106.slice - libcontainer container kubepods-burstable-pod70f7b9e3_294a_4112_b51e_d828e1594106.slice. Jan 29 11:04:52.501460 systemd[1]: Created slice kubepods-burstable-pod9e5d27cf_4dfc_46d8_a21a_b7c2d27bfa6d.slice - libcontainer container kubepods-burstable-pod9e5d27cf_4dfc_46d8_a21a_b7c2d27bfa6d.slice. Jan 29 11:04:52.508082 kubelet[2745]: I0129 11:04:52.507921 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65b7p\" (UniqueName: \"kubernetes.io/projected/70f7b9e3-294a-4112-b51e-d828e1594106-kube-api-access-65b7p\") pod \"coredns-668d6bf9bc-cb7r7\" (UID: \"70f7b9e3-294a-4112-b51e-d828e1594106\") " pod="kube-system/coredns-668d6bf9bc-cb7r7" Jan 29 11:04:52.508082 kubelet[2745]: I0129 11:04:52.507976 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d-config-volume\") pod \"coredns-668d6bf9bc-4fv89\" (UID: \"9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d\") " pod="kube-system/coredns-668d6bf9bc-4fv89" Jan 29 11:04:52.508082 kubelet[2745]: I0129 11:04:52.508000 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/70f7b9e3-294a-4112-b51e-d828e1594106-config-volume\") pod \"coredns-668d6bf9bc-cb7r7\" (UID: \"70f7b9e3-294a-4112-b51e-d828e1594106\") " pod="kube-system/coredns-668d6bf9bc-cb7r7" Jan 29 11:04:52.508082 kubelet[2745]: I0129 11:04:52.508018 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfm2\" (UniqueName: \"kubernetes.io/projected/9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d-kube-api-access-qhfm2\") pod \"coredns-668d6bf9bc-4fv89\" (UID: \"9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d\") " pod="kube-system/coredns-668d6bf9bc-4fv89" Jan 29 11:04:52.800822 containerd[1481]: time="2025-01-29T11:04:52.800552752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb7r7,Uid:70f7b9e3-294a-4112-b51e-d828e1594106,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:52.809456 containerd[1481]: time="2025-01-29T11:04:52.809409009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4fv89,Uid:9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d,Namespace:kube-system,Attempt:0,}" Jan 29 11:04:56.221998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290993496.mount: Deactivated successfully. 
Jan 29 11:05:01.995271 containerd[1481]: time="2025-01-29T11:05:01.995049307Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:01.998787 containerd[1481]: time="2025-01-29T11:05:01.997965625Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:05:01.999449 containerd[1481]: time="2025-01-29T11:05:01.999358125Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:02.000780 containerd[1481]: time="2025-01-29T11:05:02.000710185Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 13.219569369s" Jan 29 11:05:02.000780 containerd[1481]: time="2025-01-29T11:05:02.000762864Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:05:02.005029 containerd[1481]: time="2025-01-29T11:05:02.004964804Z" level=info msg="CreateContainer within sandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:05:02.030722 containerd[1481]: time="2025-01-29T11:05:02.030555719Z" level=info msg="CreateContainer within sandbox 
\"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\"" Jan 29 11:05:02.032774 containerd[1481]: time="2025-01-29T11:05:02.031434307Z" level=info msg="StartContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\"" Jan 29 11:05:02.067111 systemd[1]: run-containerd-runc-k8s.io-fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05-runc.emaM4h.mount: Deactivated successfully. Jan 29 11:05:02.080465 systemd[1]: Started cri-containerd-fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05.scope - libcontainer container fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05. Jan 29 11:05:02.111066 containerd[1481]: time="2025-01-29T11:05:02.110444580Z" level=info msg="StartContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" returns successfully" Jan 29 11:05:02.266682 kubelet[2745]: I0129 11:05:02.266469 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zwmtv" podStartSLOduration=15.653044092 podStartE2EDuration="22.266343397s" podCreationTimestamp="2025-01-29 11:04:40 +0000 UTC" firstStartedPulling="2025-01-29 11:04:42.165538431 +0000 UTC m=+6.255255453" lastFinishedPulling="2025-01-29 11:04:48.778837736 +0000 UTC m=+12.868554758" observedRunningTime="2025-01-29 11:04:53.249207379 +0000 UTC m=+17.338924401" watchObservedRunningTime="2025-01-29 11:05:02.266343397 +0000 UTC m=+26.356060419" Jan 29 11:05:02.267631 kubelet[2745]: I0129 11:05:02.267085 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-t2wsh" podStartSLOduration=1.522526354 podStartE2EDuration="21.267070307s" podCreationTimestamp="2025-01-29 11:04:41 +0000 UTC" firstStartedPulling="2025-01-29 11:04:42.258301281 +0000 UTC m=+6.348018303" 
lastFinishedPulling="2025-01-29 11:05:02.002845234 +0000 UTC m=+26.092562256" observedRunningTime="2025-01-29 11:05:02.264977657 +0000 UTC m=+26.354694679" watchObservedRunningTime="2025-01-29 11:05:02.267070307 +0000 UTC m=+26.356787329" Jan 29 11:05:05.540616 systemd-networkd[1384]: cilium_host: Link UP Jan 29 11:05:05.543256 systemd-networkd[1384]: cilium_net: Link UP Jan 29 11:05:05.543427 systemd-networkd[1384]: cilium_net: Gained carrier Jan 29 11:05:05.543571 systemd-networkd[1384]: cilium_host: Gained carrier Jan 29 11:05:05.671871 systemd-networkd[1384]: cilium_vxlan: Link UP Jan 29 11:05:05.671878 systemd-networkd[1384]: cilium_vxlan: Gained carrier Jan 29 11:05:05.967695 kernel: NET: Registered PF_ALG protocol family Jan 29 11:05:06.105447 systemd-networkd[1384]: cilium_net: Gained IPv6LL Jan 29 11:05:06.170741 systemd-networkd[1384]: cilium_host: Gained IPv6LL Jan 29 11:05:06.788826 systemd-networkd[1384]: lxc_health: Link UP Jan 29 11:05:06.800578 systemd-networkd[1384]: lxc_health: Gained carrier Jan 29 11:05:07.066530 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Jan 29 11:05:07.408713 systemd-networkd[1384]: lxc2c632e89868e: Link UP Jan 29 11:05:07.412191 kernel: eth0: renamed from tmpe74ba Jan 29 11:05:07.417084 systemd-networkd[1384]: lxc87ac53931ec2: Link UP Jan 29 11:05:07.432446 kernel: eth0: renamed from tmp3aae8 Jan 29 11:05:07.429306 systemd-networkd[1384]: lxc2c632e89868e: Gained carrier Jan 29 11:05:07.435092 systemd-networkd[1384]: lxc87ac53931ec2: Gained carrier Jan 29 11:05:07.862836 update_engine[1460]: I20250129 11:05:07.862764 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 11:05:07.862836 update_engine[1460]: I20250129 11:05:07.862825 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 11:05:07.863254 update_engine[1460]: I20250129 11:05:07.863073 1460 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Jan 29 11:05:07.865235 update_engine[1460]: I20250129 11:05:07.865175 1460 omaha_request_params.cc:62] Current group set to beta Jan 29 11:05:07.865382 update_engine[1460]: I20250129 11:05:07.865307 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 11:05:07.865382 update_engine[1460]: I20250129 11:05:07.865319 1460 update_attempter.cc:643] Scheduling an action processor start. Jan 29 11:05:07.865382 update_engine[1460]: I20250129 11:05:07.865338 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 11:05:07.865382 update_engine[1460]: I20250129 11:05:07.865377 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 11:05:07.865515 update_engine[1460]: I20250129 11:05:07.865449 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 11:05:07.865515 update_engine[1460]: I20250129 11:05:07.865458 1460 omaha_request_action.cc:272] Request: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.865515 update_engine[1460]: Jan 29 11:05:07.866101 update_engine[1460]: I20250129 11:05:07.865527 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:05:07.866126 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 11:05:07.869241 update_engine[1460]: I20250129 11:05:07.869153 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:05:07.869662 update_engine[1460]: I20250129 11:05:07.869614 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:05:07.872430 update_engine[1460]: E20250129 11:05:07.871830 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:05:07.872430 update_engine[1460]: I20250129 11:05:07.871958 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 11:05:07.961802 systemd-networkd[1384]: lxc_health: Gained IPv6LL Jan 29 11:05:08.473161 systemd-networkd[1384]: lxc87ac53931ec2: Gained IPv6LL Jan 29 11:05:08.920870 systemd-networkd[1384]: lxc2c632e89868e: Gained IPv6LL Jan 29 11:05:11.792688 containerd[1481]: time="2025-01-29T11:05:11.778785445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:11.792688 containerd[1481]: time="2025-01-29T11:05:11.779502915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:11.792688 containerd[1481]: time="2025-01-29T11:05:11.779533955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:11.792688 containerd[1481]: time="2025-01-29T11:05:11.779740112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:11.812168 containerd[1481]: time="2025-01-29T11:05:11.811809496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:11.812168 containerd[1481]: time="2025-01-29T11:05:11.811873816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:11.812168 containerd[1481]: time="2025-01-29T11:05:11.811890575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:11.812168 containerd[1481]: time="2025-01-29T11:05:11.811970334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:11.840999 systemd[1]: Started cri-containerd-e74ba67911d5b03dff134d87577dd834ada9c37cbfa7daab8af9bd7324d1f8ee.scope - libcontainer container e74ba67911d5b03dff134d87577dd834ada9c37cbfa7daab8af9bd7324d1f8ee. Jan 29 11:05:11.852629 systemd[1]: Started cri-containerd-3aae82fb97734cb188f6599714e452b0b39148cf6d54035c622f8a99471a06ba.scope - libcontainer container 3aae82fb97734cb188f6599714e452b0b39148cf6d54035c622f8a99471a06ba. Jan 29 11:05:11.905107 containerd[1481]: time="2025-01-29T11:05:11.905053887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4fv89,Uid:9e5d27cf-4dfc-46d8-a21a-b7c2d27bfa6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e74ba67911d5b03dff134d87577dd834ada9c37cbfa7daab8af9bd7324d1f8ee\"" Jan 29 11:05:11.912474 containerd[1481]: time="2025-01-29T11:05:11.912314913Z" level=info msg="CreateContainer within sandbox \"e74ba67911d5b03dff134d87577dd834ada9c37cbfa7daab8af9bd7324d1f8ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:05:11.937302 containerd[1481]: time="2025-01-29T11:05:11.937112992Z" level=info msg="CreateContainer within sandbox \"e74ba67911d5b03dff134d87577dd834ada9c37cbfa7daab8af9bd7324d1f8ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5e7f8f1455ff9b65c482d76fffed2ec316aa1e530db6a21ca99c74c23c85f8c\"" Jan 29 11:05:11.937947 containerd[1481]: time="2025-01-29T11:05:11.937769423Z" level=info msg="StartContainer for \"f5e7f8f1455ff9b65c482d76fffed2ec316aa1e530db6a21ca99c74c23c85f8c\"" Jan 29 11:05:11.946965 containerd[1481]: time="2025-01-29T11:05:11.946896105Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-cb7r7,Uid:70f7b9e3-294a-4112-b51e-d828e1594106,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aae82fb97734cb188f6599714e452b0b39148cf6d54035c622f8a99471a06ba\"" Jan 29 11:05:11.952642 containerd[1481]: time="2025-01-29T11:05:11.952387913Z" level=info msg="CreateContainer within sandbox \"3aae82fb97734cb188f6599714e452b0b39148cf6d54035c622f8a99471a06ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:05:11.970087 containerd[1481]: time="2025-01-29T11:05:11.970026725Z" level=info msg="CreateContainer within sandbox \"3aae82fb97734cb188f6599714e452b0b39148cf6d54035c622f8a99471a06ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4c6065336c19f3891ccd5a6b57bab0df4ebd8fef865167087ce6383ae51a936\"" Jan 29 11:05:11.970684 containerd[1481]: time="2025-01-29T11:05:11.970618037Z" level=info msg="StartContainer for \"f4c6065336c19f3891ccd5a6b57bab0df4ebd8fef865167087ce6383ae51a936\"" Jan 29 11:05:11.993934 systemd[1]: Started cri-containerd-f5e7f8f1455ff9b65c482d76fffed2ec316aa1e530db6a21ca99c74c23c85f8c.scope - libcontainer container f5e7f8f1455ff9b65c482d76fffed2ec316aa1e530db6a21ca99c74c23c85f8c. Jan 29 11:05:12.021848 systemd[1]: Started cri-containerd-f4c6065336c19f3891ccd5a6b57bab0df4ebd8fef865167087ce6383ae51a936.scope - libcontainer container f4c6065336c19f3891ccd5a6b57bab0df4ebd8fef865167087ce6383ae51a936. 
Jan 29 11:05:12.047914 containerd[1481]: time="2025-01-29T11:05:12.047481206Z" level=info msg="StartContainer for \"f5e7f8f1455ff9b65c482d76fffed2ec316aa1e530db6a21ca99c74c23c85f8c\" returns successfully" Jan 29 11:05:12.069892 containerd[1481]: time="2025-01-29T11:05:12.069846439Z" level=info msg="StartContainer for \"f4c6065336c19f3891ccd5a6b57bab0df4ebd8fef865167087ce6383ae51a936\" returns successfully" Jan 29 11:05:12.298679 kubelet[2745]: I0129 11:05:12.298459 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cb7r7" podStartSLOduration=31.298436102 podStartE2EDuration="31.298436102s" podCreationTimestamp="2025-01-29 11:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:12.29710132 +0000 UTC m=+36.386818342" watchObservedRunningTime="2025-01-29 11:05:12.298436102 +0000 UTC m=+36.388153164" Jan 29 11:05:17.862734 update_engine[1460]: I20250129 11:05:17.862481 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:05:17.863242 update_engine[1460]: I20250129 11:05:17.862890 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:05:17.863242 update_engine[1460]: I20250129 11:05:17.863198 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:05:17.864613 update_engine[1460]: E20250129 11:05:17.864534 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:05:17.864736 update_engine[1460]: I20250129 11:05:17.864637 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 11:05:27.866604 update_engine[1460]: I20250129 11:05:27.866472 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:05:27.867265 update_engine[1460]: I20250129 11:05:27.866950 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:05:27.867820 update_engine[1460]: I20250129 11:05:27.867743 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 11:05:27.868093 update_engine[1460]: E20250129 11:05:27.868021 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:05:27.868183 update_engine[1460]: I20250129 11:05:27.868113 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 11:05:37.863264 update_engine[1460]: I20250129 11:05:37.862622 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:05:37.863264 update_engine[1460]: I20250129 11:05:37.862898 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:05:37.863264 update_engine[1460]: I20250129 11:05:37.863137 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:05:37.864670 update_engine[1460]: E20250129 11:05:37.863927 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.863993 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864013 1460 omaha_request_action.cc:617] Omaha request response: Jan 29 11:05:37.864670 update_engine[1460]: E20250129 11:05:37.864090 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864107 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864113 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864118 1460 update_attempter.cc:306] Processing Done. Jan 29 11:05:37.864670 update_engine[1460]: E20250129 11:05:37.864132 1460 update_attempter.cc:619] Update failed. Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864137 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864142 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864148 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864212 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864231 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 11:05:37.864670 update_engine[1460]: I20250129 11:05:37.864238 1460 omaha_request_action.cc:272] Request: Jan 29 11:05:37.864670 update_engine[1460]: Jan 29 11:05:37.864670 update_engine[1460]: Jan 29 11:05:37.864670 update_engine[1460]: Jan 29 11:05:37.865132 update_engine[1460]: Jan 29 11:05:37.865132 update_engine[1460]: Jan 29 11:05:37.865132 update_engine[1460]: Jan 29 11:05:37.865132 update_engine[1460]: I20250129 11:05:37.864244 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:05:37.865132 update_engine[1460]: I20250129 11:05:37.864388 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:05:37.865132 update_engine[1460]: I20250129 11:05:37.864590 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:05:37.865246 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 11:05:37.865494 update_engine[1460]: E20250129 11:05:37.865342 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865417 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865430 1460 omaha_request_action.cc:617] Omaha request response: Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865442 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865449 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865458 1460 update_attempter.cc:306] Processing Done. Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865469 1460 update_attempter.cc:310] Error event sent. Jan 29 11:05:37.865494 update_engine[1460]: I20250129 11:05:37.865484 1460 update_check_scheduler.cc:74] Next update check in 49m47s Jan 29 11:05:37.865918 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 11:09:26.642217 systemd[1]: Started sshd@7-116.202.15.110:22-147.75.109.163:37010.service - OpenSSH per-connection server daemon (147.75.109.163:37010). Jan 29 11:09:27.636394 sshd[4168]: Accepted publickey for core from 147.75.109.163 port 37010 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:27.638497 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:27.644678 systemd-logind[1459]: New session 8 of user core. 
Jan 29 11:09:27.656200 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:09:28.421383 sshd[4170]: Connection closed by 147.75.109.163 port 37010 Jan 29 11:09:28.420570 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:28.425093 systemd[1]: sshd@7-116.202.15.110:22-147.75.109.163:37010.service: Deactivated successfully. Jan 29 11:09:28.427968 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:09:28.429437 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:09:28.431085 systemd-logind[1459]: Removed session 8. Jan 29 11:09:33.595213 systemd[1]: Started sshd@8-116.202.15.110:22-147.75.109.163:57916.service - OpenSSH per-connection server daemon (147.75.109.163:57916). Jan 29 11:09:34.591064 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 57916 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:34.593355 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:34.598315 systemd-logind[1459]: New session 9 of user core. Jan 29 11:09:34.604908 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:09:35.353555 sshd[4184]: Connection closed by 147.75.109.163 port 57916 Jan 29 11:09:35.352011 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:35.358841 systemd[1]: sshd@8-116.202.15.110:22-147.75.109.163:57916.service: Deactivated successfully. Jan 29 11:09:35.362089 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:09:35.366889 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:09:35.368263 systemd-logind[1459]: Removed session 9. Jan 29 11:09:40.529338 systemd[1]: Started sshd@9-116.202.15.110:22-147.75.109.163:39878.service - OpenSSH per-connection server daemon (147.75.109.163:39878). 
Jan 29 11:09:41.533699 sshd[4198]: Accepted publickey for core from 147.75.109.163 port 39878 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:41.536497 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:41.541388 systemd-logind[1459]: New session 10 of user core. Jan 29 11:09:41.549045 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:09:42.308008 sshd[4200]: Connection closed by 147.75.109.163 port 39878 Jan 29 11:09:42.307402 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:42.312056 systemd[1]: sshd@9-116.202.15.110:22-147.75.109.163:39878.service: Deactivated successfully. Jan 29 11:09:42.315471 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:09:42.317926 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:09:42.321449 systemd-logind[1459]: Removed session 10. Jan 29 11:09:42.481118 systemd[1]: Started sshd@10-116.202.15.110:22-147.75.109.163:39892.service - OpenSSH per-connection server daemon (147.75.109.163:39892). Jan 29 11:09:43.475624 sshd[4212]: Accepted publickey for core from 147.75.109.163 port 39892 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:43.477484 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:43.484200 systemd-logind[1459]: New session 11 of user core. Jan 29 11:09:43.490470 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:09:44.300961 sshd[4216]: Connection closed by 147.75.109.163 port 39892 Jan 29 11:09:44.302058 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:44.308529 systemd[1]: sshd@10-116.202.15.110:22-147.75.109.163:39892.service: Deactivated successfully. Jan 29 11:09:44.313546 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 29 11:09:44.316123 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:09:44.317462 systemd-logind[1459]: Removed session 11. Jan 29 11:09:44.478149 systemd[1]: Started sshd@11-116.202.15.110:22-147.75.109.163:39908.service - OpenSSH per-connection server daemon (147.75.109.163:39908). Jan 29 11:09:45.468096 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 39908 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:45.470215 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:45.477718 systemd-logind[1459]: New session 12 of user core. Jan 29 11:09:45.484267 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:09:46.265527 sshd[4227]: Connection closed by 147.75.109.163 port 39908 Jan 29 11:09:46.264973 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:46.271583 systemd[1]: sshd@11-116.202.15.110:22-147.75.109.163:39908.service: Deactivated successfully. Jan 29 11:09:46.276560 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:09:46.279535 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:09:46.283722 systemd-logind[1459]: Removed session 12. Jan 29 11:09:51.444177 systemd[1]: Started sshd@12-116.202.15.110:22-147.75.109.163:48340.service - OpenSSH per-connection server daemon (147.75.109.163:48340). Jan 29 11:09:52.439538 sshd[4238]: Accepted publickey for core from 147.75.109.163 port 48340 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:52.446772 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:52.468317 systemd-logind[1459]: New session 13 of user core. Jan 29 11:09:52.478172 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 29 11:09:53.204523 sshd[4240]: Connection closed by 147.75.109.163 port 48340 Jan 29 11:09:53.203919 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:53.207630 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:09:53.208230 systemd[1]: sshd@12-116.202.15.110:22-147.75.109.163:48340.service: Deactivated successfully. Jan 29 11:09:53.211867 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:09:53.214393 systemd-logind[1459]: Removed session 13. Jan 29 11:09:53.383843 systemd[1]: Started sshd@13-116.202.15.110:22-147.75.109.163:48348.service - OpenSSH per-connection server daemon (147.75.109.163:48348). Jan 29 11:09:54.388380 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 48348 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:54.391629 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:54.400858 systemd-logind[1459]: New session 14 of user core. Jan 29 11:09:54.406927 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:09:55.196665 sshd[4252]: Connection closed by 147.75.109.163 port 48348 Jan 29 11:09:55.197475 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:55.202972 systemd[1]: sshd@13-116.202.15.110:22-147.75.109.163:48348.service: Deactivated successfully. Jan 29 11:09:55.207505 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:09:55.209296 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:09:55.211255 systemd-logind[1459]: Removed session 14. Jan 29 11:09:55.380088 systemd[1]: Started sshd@14-116.202.15.110:22-147.75.109.163:48362.service - OpenSSH per-connection server daemon (147.75.109.163:48362). 
Jan 29 11:09:56.370759 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 48362 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:56.373037 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:56.379157 systemd-logind[1459]: New session 15 of user core. Jan 29 11:09:56.388742 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:09:58.109873 sshd[4263]: Connection closed by 147.75.109.163 port 48362 Jan 29 11:09:58.114172 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:58.120818 systemd[1]: sshd@14-116.202.15.110:22-147.75.109.163:48362.service: Deactivated successfully. Jan 29 11:09:58.123602 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:09:58.124841 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:09:58.128503 systemd-logind[1459]: Removed session 15. Jan 29 11:09:58.289000 systemd[1]: Started sshd@15-116.202.15.110:22-147.75.109.163:57582.service - OpenSSH per-connection server daemon (147.75.109.163:57582). Jan 29 11:09:59.267615 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 57582 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:09:59.269444 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:59.275195 systemd-logind[1459]: New session 16 of user core. Jan 29 11:09:59.285035 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:10:00.139537 sshd[4282]: Connection closed by 147.75.109.163 port 57582 Jan 29 11:10:00.140616 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:00.147506 systemd[1]: sshd@15-116.202.15.110:22-147.75.109.163:57582.service: Deactivated successfully. Jan 29 11:10:00.147871 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. 
Jan 29 11:10:00.151914 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:10:00.156100 systemd-logind[1459]: Removed session 16. Jan 29 11:10:00.319099 systemd[1]: Started sshd@16-116.202.15.110:22-147.75.109.163:57594.service - OpenSSH per-connection server daemon (147.75.109.163:57594). Jan 29 11:10:01.333503 sshd[4291]: Accepted publickey for core from 147.75.109.163 port 57594 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:01.336539 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:01.341579 systemd-logind[1459]: New session 17 of user core. Jan 29 11:10:01.349951 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:10:02.106354 sshd[4293]: Connection closed by 147.75.109.163 port 57594 Jan 29 11:10:02.105687 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:02.111598 systemd[1]: sshd@16-116.202.15.110:22-147.75.109.163:57594.service: Deactivated successfully. Jan 29 11:10:02.119185 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:10:02.123136 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:10:02.125356 systemd-logind[1459]: Removed session 17. Jan 29 11:10:07.281213 systemd[1]: Started sshd@17-116.202.15.110:22-147.75.109.163:57606.service - OpenSSH per-connection server daemon (147.75.109.163:57606). Jan 29 11:10:08.266959 sshd[4305]: Accepted publickey for core from 147.75.109.163 port 57606 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:08.269457 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:08.278419 systemd-logind[1459]: New session 18 of user core. Jan 29 11:10:08.283216 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 11:10:09.038434 sshd[4307]: Connection closed by 147.75.109.163 port 57606 Jan 29 11:10:09.039332 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:09.045032 systemd[1]: sshd@17-116.202.15.110:22-147.75.109.163:57606.service: Deactivated successfully. Jan 29 11:10:09.050370 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:10:09.056729 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:10:09.060866 systemd-logind[1459]: Removed session 18. Jan 29 11:10:14.225246 systemd[1]: Started sshd@18-116.202.15.110:22-147.75.109.163:37200.service - OpenSSH per-connection server daemon (147.75.109.163:37200). Jan 29 11:10:15.210247 systemd[1]: Started sshd@19-116.202.15.110:22-47.239.223.244:54344.service - OpenSSH per-connection server daemon (47.239.223.244:54344). Jan 29 11:10:15.217216 sshd[4320]: Accepted publickey for core from 147.75.109.163 port 37200 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:15.219531 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:15.227733 systemd-logind[1459]: New session 19 of user core. Jan 29 11:10:15.232943 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:10:15.975083 sshd[4325]: Connection closed by 147.75.109.163 port 37200 Jan 29 11:10:15.975880 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:15.980969 systemd[1]: sshd@18-116.202.15.110:22-147.75.109.163:37200.service: Deactivated successfully. Jan 29 11:10:15.984608 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:10:15.989068 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:10:15.991804 systemd-logind[1459]: Removed session 19. Jan 29 11:10:16.147103 systemd[1]: Started sshd@20-116.202.15.110:22-147.75.109.163:37204.service - OpenSSH per-connection server daemon (147.75.109.163:37204). 
Jan 29 11:10:17.146128 sshd[4336]: Accepted publickey for core from 147.75.109.163 port 37204 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg
Jan 29 11:10:17.148842 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:10:17.154627 systemd-logind[1459]: New session 20 of user core.
Jan 29 11:10:17.167721 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:10:19.219328 kubelet[2745]: I0129 11:10:19.219231 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4fv89" podStartSLOduration=338.219206184 podStartE2EDuration="5m38.219206184s" podCreationTimestamp="2025-01-29 11:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:12.354113667 +0000 UTC m=+36.443830689" watchObservedRunningTime="2025-01-29 11:10:19.219206184 +0000 UTC m=+343.308923206"
Jan 29 11:10:19.250113 containerd[1481]: time="2025-01-29T11:10:19.247988476Z" level=info msg="StopContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" with timeout 30 (s)"
Jan 29 11:10:19.252100 containerd[1481]: time="2025-01-29T11:10:19.251957527Z" level=info msg="Stop container \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" with signal terminated"
Jan 29 11:10:19.255724 systemd[1]: run-containerd-runc-k8s.io-f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622-runc.ZKD5Xo.mount: Deactivated successfully.
Jan 29 11:10:19.272669 systemd[1]: cri-containerd-fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05.scope: Deactivated successfully.
Jan 29 11:10:19.279519 containerd[1481]: time="2025-01-29T11:10:19.279139718Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:10:19.287161 containerd[1481]: time="2025-01-29T11:10:19.287017020Z" level=info msg="StopContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" with timeout 2 (s)"
Jan 29 11:10:19.288199 containerd[1481]: time="2025-01-29T11:10:19.288092754Z" level=info msg="Stop container \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" with signal terminated"
Jan 29 11:10:19.299879 systemd-networkd[1384]: lxc_health: Link DOWN
Jan 29 11:10:19.299890 systemd-networkd[1384]: lxc_health: Lost carrier
Jan 29 11:10:19.322109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05-rootfs.mount: Deactivated successfully.
Jan 29 11:10:19.327883 systemd[1]: cri-containerd-f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622.scope: Deactivated successfully.
Jan 29 11:10:19.329045 systemd[1]: cri-containerd-f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622.scope: Consumed 8.570s CPU time.
Jan 29 11:10:19.337240 containerd[1481]: time="2025-01-29T11:10:19.337170988Z" level=info msg="shim disconnected" id=fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05 namespace=k8s.io
Jan 29 11:10:19.337240 containerd[1481]: time="2025-01-29T11:10:19.337230789Z" level=warning msg="cleaning up after shim disconnected" id=fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05 namespace=k8s.io
Jan 29 11:10:19.337240 containerd[1481]: time="2025-01-29T11:10:19.337240149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:19.362758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622-rootfs.mount: Deactivated successfully.
Jan 29 11:10:19.377513 containerd[1481]: time="2025-01-29T11:10:19.377240745Z" level=info msg="StopContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" returns successfully"
Jan 29 11:10:19.378618 containerd[1481]: time="2025-01-29T11:10:19.378551122Z" level=info msg="shim disconnected" id=f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622 namespace=k8s.io
Jan 29 11:10:19.378745 containerd[1481]: time="2025-01-29T11:10:19.378624803Z" level=warning msg="cleaning up after shim disconnected" id=f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622 namespace=k8s.io
Jan 29 11:10:19.378745 containerd[1481]: time="2025-01-29T11:10:19.378636163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:19.379031 containerd[1481]: time="2025-01-29T11:10:19.378872526Z" level=info msg="StopPodSandbox for \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\""
Jan 29 11:10:19.379031 containerd[1481]: time="2025-01-29T11:10:19.378924807Z" level=info msg="Container to stop \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.381357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7-shm.mount: Deactivated successfully.
Jan 29 11:10:19.394108 systemd[1]: cri-containerd-bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7.scope: Deactivated successfully.
Jan 29 11:10:19.406110 containerd[1481]: time="2025-01-29T11:10:19.406053197Z" level=info msg="StopContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" returns successfully"
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406835687Z" level=info msg="StopPodSandbox for \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\""
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406881688Z" level=info msg="Container to stop \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406892648Z" level=info msg="Container to stop \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406901648Z" level=info msg="Container to stop \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406911248Z" level=info msg="Container to stop \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.407007 containerd[1481]: time="2025-01-29T11:10:19.406919489Z" level=info msg="Container to stop \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:10:19.414640 systemd[1]: cri-containerd-112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897.scope: Deactivated successfully.
Jan 29 11:10:19.436989 containerd[1481]: time="2025-01-29T11:10:19.436698433Z" level=info msg="shim disconnected" id=bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7 namespace=k8s.io
Jan 29 11:10:19.436989 containerd[1481]: time="2025-01-29T11:10:19.436764354Z" level=warning msg="cleaning up after shim disconnected" id=bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7 namespace=k8s.io
Jan 29 11:10:19.436989 containerd[1481]: time="2025-01-29T11:10:19.436773514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:19.449376 containerd[1481]: time="2025-01-29T11:10:19.449296156Z" level=info msg="shim disconnected" id=112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897 namespace=k8s.io
Jan 29 11:10:19.449376 containerd[1481]: time="2025-01-29T11:10:19.449361037Z" level=warning msg="cleaning up after shim disconnected" id=112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897 namespace=k8s.io
Jan 29 11:10:19.449376 containerd[1481]: time="2025-01-29T11:10:19.449369437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:19.457734 containerd[1481]: time="2025-01-29T11:10:19.457256179Z" level=info msg="TearDown network for sandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" successfully"
Jan 29 11:10:19.457734 containerd[1481]: time="2025-01-29T11:10:19.457305459Z" level=info msg="StopPodSandbox for \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" returns successfully"
Jan 29 11:10:19.475777 kubelet[2745]: I0129 11:10:19.471889 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/728b6ad9-4b71-475f-a661-81e8f9bc8501-cilium-config-path\") pod \"728b6ad9-4b71-475f-a661-81e8f9bc8501\" (UID: \"728b6ad9-4b71-475f-a661-81e8f9bc8501\") "
Jan 29 11:10:19.475777 kubelet[2745]: I0129 11:10:19.471941 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzb5\" (UniqueName: \"kubernetes.io/projected/728b6ad9-4b71-475f-a661-81e8f9bc8501-kube-api-access-hwzb5\") pod \"728b6ad9-4b71-475f-a661-81e8f9bc8501\" (UID: \"728b6ad9-4b71-475f-a661-81e8f9bc8501\") "
Jan 29 11:10:19.479611 kubelet[2745]: I0129 11:10:19.478493 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728b6ad9-4b71-475f-a661-81e8f9bc8501-kube-api-access-hwzb5" (OuterVolumeSpecName: "kube-api-access-hwzb5") pod "728b6ad9-4b71-475f-a661-81e8f9bc8501" (UID: "728b6ad9-4b71-475f-a661-81e8f9bc8501"). InnerVolumeSpecName "kube-api-access-hwzb5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:10:19.480368 kubelet[2745]: I0129 11:10:19.480303 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728b6ad9-4b71-475f-a661-81e8f9bc8501-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "728b6ad9-4b71-475f-a661-81e8f9bc8501" (UID: "728b6ad9-4b71-475f-a661-81e8f9bc8501"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:10:19.481930 containerd[1481]: time="2025-01-29T11:10:19.481873817Z" level=info msg="TearDown network for sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" successfully"
Jan 29 11:10:19.481930 containerd[1481]: time="2025-01-29T11:10:19.481917737Z" level=info msg="StopPodSandbox for \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" returns successfully"
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.572840 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-cgroup\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.572926 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-config-path\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.572960 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-bpf-maps\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.572993 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-clustermesh-secrets\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.573022 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-xtables-lock\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.574687 kubelet[2745]: I0129 11:10:19.573055 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hubble-tls\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573066 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573124 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cni-path" (OuterVolumeSpecName: "cni-path") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573083 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cni-path\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573190 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-lib-modules\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573261 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2clj\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-kube-api-access-t2clj\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.575153 kubelet[2745]: I0129 11:10:19.573319 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-etc-cni-netd\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573409 2745 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cni-path\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573543 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/728b6ad9-4b71-475f-a661-81e8f9bc8501-cilium-config-path\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573570 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hwzb5\" (UniqueName: \"kubernetes.io/projected/728b6ad9-4b71-475f-a661-81e8f9bc8501-kube-api-access-hwzb5\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573619 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-cgroup\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573710 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.575415 kubelet[2745]: I0129 11:10:19.573774 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.577377 kubelet[2745]: I0129 11:10:19.577330 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:10:19.577618 kubelet[2745]: I0129 11:10:19.577601 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.578745 kubelet[2745]: I0129 11:10:19.577669 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.579583 kubelet[2745]: I0129 11:10:19.579515 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-kube-api-access-t2clj" (OuterVolumeSpecName: "kube-api-access-t2clj") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "kube-api-access-t2clj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:10:19.582556 kubelet[2745]: I0129 11:10:19.582506 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:10:19.582820 kubelet[2745]: I0129 11:10:19.582581 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 29 11:10:19.674001 kubelet[2745]: I0129 11:10:19.673943 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-run\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.674363 kubelet[2745]: I0129 11:10:19.674320 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hostproc\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.674483 kubelet[2745]: I0129 11:10:19.674464 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-net\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.674590 kubelet[2745]: I0129 11:10:19.674572 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-kernel\") pod \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\" (UID: \"ecddc1ca-2734-412e-a9d4-a87cd0bff1d9\") "
Jan 29 11:10:19.674774 kubelet[2745]: I0129 11:10:19.674755 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-config-path\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.674931 kubelet[2745]: I0129 11:10:19.674908 2745 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-bpf-maps\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675027 kubelet[2745]: I0129 11:10:19.675012 2745 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-clustermesh-secrets\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675121 kubelet[2745]: I0129 11:10:19.675102 2745 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-xtables-lock\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675208 kubelet[2745]: I0129 11:10:19.675194 2745 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hubble-tls\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675283 2745 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-lib-modules\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675304 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2clj\" (UniqueName: \"kubernetes.io/projected/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-kube-api-access-t2clj\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675320 2745 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-etc-cni-netd\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675376 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675413 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.675494 kubelet[2745]: I0129 11:10:19.675440 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hostproc" (OuterVolumeSpecName: "hostproc") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.675813 kubelet[2745]: I0129 11:10:19.675465 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" (UID: "ecddc1ca-2734-412e-a9d4-a87cd0bff1d9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:10:19.777584 kubelet[2745]: I0129 11:10:19.776279 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-kernel\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.777584 kubelet[2745]: I0129 11:10:19.776336 2745 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-hostproc\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.777584 kubelet[2745]: I0129 11:10:19.776349 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-host-proc-sys-net\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:19.777584 kubelet[2745]: I0129 11:10:19.776360 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9-cilium-run\") on node \"ci-4186-1-0-1-dfe7c46cbd\" DevicePath \"\""
Jan 29 11:10:20.098070 systemd[1]: Removed slice kubepods-besteffort-pod728b6ad9_4b71_475f_a661_81e8f9bc8501.slice - libcontainer container kubepods-besteffort-pod728b6ad9_4b71_475f_a661_81e8f9bc8501.slice.
Jan 29 11:10:20.102690 systemd[1]: Removed slice kubepods-burstable-podecddc1ca_2734_412e_a9d4_a87cd0bff1d9.slice - libcontainer container kubepods-burstable-podecddc1ca_2734_412e_a9d4_a87cd0bff1d9.slice.
Jan 29 11:10:20.103980 systemd[1]: kubepods-burstable-podecddc1ca_2734_412e_a9d4_a87cd0bff1d9.slice: Consumed 8.684s CPU time.
Jan 29 11:10:20.137240 kubelet[2745]: I0129 11:10:20.137114 2745 scope.go:117] "RemoveContainer" containerID="fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05"
Jan 29 11:10:20.149178 containerd[1481]: time="2025-01-29T11:10:20.147710478Z" level=info msg="RemoveContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\""
Jan 29 11:10:20.158056 containerd[1481]: time="2025-01-29T11:10:20.157580124Z" level=info msg="RemoveContainer for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" returns successfully"
Jan 29 11:10:20.160429 kubelet[2745]: I0129 11:10:20.160402 2745 scope.go:117] "RemoveContainer" containerID="fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05"
Jan 29 11:10:20.161387 containerd[1481]: time="2025-01-29T11:10:20.161106009Z" level=error msg="ContainerStatus for \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\": not found"
Jan 29 11:10:20.161523 kubelet[2745]: E0129 11:10:20.161305 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\": not found" containerID="fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05"
Jan 29 11:10:20.161523 kubelet[2745]: I0129 11:10:20.161340 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05"} err="failed to get container status \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd02df9c471df0e0d74c2c69f294050a52d4d1bbca3037f3d24859987e8b9b05\": not found"
Jan 29 11:10:20.161523 kubelet[2745]: I0129 11:10:20.161420 2745 scope.go:117] "RemoveContainer" containerID="f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622"
Jan 29 11:10:20.165447 containerd[1481]: time="2025-01-29T11:10:20.164337690Z" level=info msg="RemoveContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\""
Jan 29 11:10:20.169778 containerd[1481]: time="2025-01-29T11:10:20.169709079Z" level=info msg="RemoveContainer for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" returns successfully"
Jan 29 11:10:20.170445 kubelet[2745]: I0129 11:10:20.170421 2745 scope.go:117] "RemoveContainer" containerID="4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d"
Jan 29 11:10:20.172428 containerd[1481]: time="2025-01-29T11:10:20.172114870Z" level=info msg="RemoveContainer for \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\""
Jan 29 11:10:20.177527 containerd[1481]: time="2025-01-29T11:10:20.176355524Z" level=info msg="RemoveContainer for \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\" returns successfully"
Jan 29 11:10:20.177639 kubelet[2745]: I0129 11:10:20.176774 2745 scope.go:117] "RemoveContainer" containerID="b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6"
Jan 29 11:10:20.182521 containerd[1481]: time="2025-01-29T11:10:20.181416709Z" level=info msg="RemoveContainer for \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\""
Jan 29 11:10:20.186582 containerd[1481]: time="2025-01-29T11:10:20.186522054Z" level=info msg="RemoveContainer for \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\" returns successfully"
Jan 29 11:10:20.187104 kubelet[2745]: I0129 11:10:20.187082 2745 scope.go:117] "RemoveContainer" containerID="52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af"
Jan 29 11:10:20.190534 containerd[1481]: time="2025-01-29T11:10:20.189830736Z" level=info msg="RemoveContainer for \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\""
Jan 29 11:10:20.194750 containerd[1481]: time="2025-01-29T11:10:20.194638038Z" level=info msg="RemoveContainer for \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\" returns successfully"
Jan 29 11:10:20.195018 kubelet[2745]: I0129 11:10:20.194991 2745 scope.go:117] "RemoveContainer" containerID="909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65"
Jan 29 11:10:20.198597 containerd[1481]: time="2025-01-29T11:10:20.198553568Z" level=info msg="RemoveContainer for \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\""
Jan 29 11:10:20.202496 containerd[1481]: time="2025-01-29T11:10:20.202445898Z" level=info msg="RemoveContainer for \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\" returns successfully"
Jan 29 11:10:20.203394 kubelet[2745]: I0129 11:10:20.202961 2745 scope.go:117] "RemoveContainer" containerID="f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622"
Jan 29 11:10:20.203528 containerd[1481]: time="2025-01-29T11:10:20.203245468Z" level=error msg="ContainerStatus for \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\": not found"
Jan 29 11:10:20.203913 kubelet[2745]: E0129 11:10:20.203713 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\": not found" containerID="f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622"
Jan 29 11:10:20.203913 kubelet[2745]: I0129 11:10:20.203752 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622"} err="failed to get container status \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\": rpc error: code = NotFound desc = an error occurred when try to find container \"f95064830a0473390491bc9170c9d7c71205c4d1981bceac5e34a935707af622\": not found"
Jan 29 11:10:20.203913 kubelet[2745]: I0129 11:10:20.203778 2745 scope.go:117] "RemoveContainer" containerID="4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d"
Jan 29 11:10:20.204103 containerd[1481]: time="2025-01-29T11:10:20.204034318Z" level=error msg="ContainerStatus for \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\": not found"
Jan 29 11:10:20.204200 kubelet[2745]: E0129 11:10:20.204172 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\": not found" containerID="4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d"
Jan 29 11:10:20.204241 kubelet[2745]: I0129 11:10:20.204200 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d"} err="failed to get container status \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e209c1480c1ca74edb11d52d5f6b478425ed8a3a3a6f82fbf28bb5c383c341d\": not found"
Jan 29 11:10:20.204369 kubelet[2745]: I0129 11:10:20.204221 2745 scope.go:117] "RemoveContainer" containerID="b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6"
Jan 29 11:10:20.204475 containerd[1481]: time="2025-01-29T11:10:20.204435043Z" level=error msg="ContainerStatus for \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\": not found"
Jan 29 11:10:20.204816 kubelet[2745]: E0129 11:10:20.204613 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\": not found" containerID="b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6"
Jan 29 11:10:20.204816 kubelet[2745]: I0129 11:10:20.204643 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6"} err="failed to get container status \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8d2b890923ef834bf62f07192aff13fd773eb4b3b70b5d8538dd71d23847ad6\": not found"
Jan 29 11:10:20.204816 kubelet[2745]: I0129 11:10:20.204686 2745 scope.go:117] "RemoveContainer" containerID="52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af"
Jan 29 11:10:20.205017 containerd[1481]: time="2025-01-29T11:10:20.204983610Z" level=error msg="ContainerStatus for \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\": not found"
Jan 29 11:10:20.205196 kubelet[2745]: E0129 11:10:20.205130 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\": not found"
containerID="52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af" Jan 29 11:10:20.205196 kubelet[2745]: I0129 11:10:20.205162 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af"} err="failed to get container status \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\": rpc error: code = NotFound desc = an error occurred when try to find container \"52375fdaa5063cb4c460dbb10c9495d441788039b52f8199ab3165f6e2b835af\": not found" Jan 29 11:10:20.205196 kubelet[2745]: I0129 11:10:20.205182 2745 scope.go:117] "RemoveContainer" containerID="909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65" Jan 29 11:10:20.205543 containerd[1481]: time="2025-01-29T11:10:20.205496417Z" level=error msg="ContainerStatus for \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\": not found" Jan 29 11:10:20.205765 kubelet[2745]: E0129 11:10:20.205746 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\": not found" containerID="909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65" Jan 29 11:10:20.205844 kubelet[2745]: I0129 11:10:20.205772 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65"} err="failed to get container status \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\": rpc error: code = NotFound desc = an error occurred when try to find container \"909e6d2853201d61dc658f4663c9de73d93c749038e0702da04cc8562a6cfd65\": not found" Jan 29 
11:10:20.245288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7-rootfs.mount: Deactivated successfully. Jan 29 11:10:20.245412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897-rootfs.mount: Deactivated successfully. Jan 29 11:10:20.245467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897-shm.mount: Deactivated successfully. Jan 29 11:10:20.245523 systemd[1]: var-lib-kubelet-pods-ecddc1ca\x2d2734\x2d412e\x2da9d4\x2da87cd0bff1d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2clj.mount: Deactivated successfully. Jan 29 11:10:20.245576 systemd[1]: var-lib-kubelet-pods-728b6ad9\x2d4b71\x2d475f\x2da661\x2d81e8f9bc8501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhwzb5.mount: Deactivated successfully. Jan 29 11:10:20.245636 systemd[1]: var-lib-kubelet-pods-ecddc1ca\x2d2734\x2d412e\x2da9d4\x2da87cd0bff1d9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:10:20.245736 systemd[1]: var-lib-kubelet-pods-ecddc1ca\x2d2734\x2d412e\x2da9d4\x2da87cd0bff1d9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:10:21.320571 sshd[4338]: Connection closed by 147.75.109.163 port 37204 Jan 29 11:10:21.322045 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:21.327029 systemd[1]: sshd@20-116.202.15.110:22-147.75.109.163:37204.service: Deactivated successfully. Jan 29 11:10:21.329737 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 29 11:10:21.332641 kubelet[2745]: E0129 11:10:21.332569 2745 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:10:21.334122 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:10:21.336135 systemd-logind[1459]: Removed session 20. Jan 29 11:10:21.497128 systemd[1]: Started sshd@21-116.202.15.110:22-147.75.109.163:60344.service - OpenSSH per-connection server daemon (147.75.109.163:60344). Jan 29 11:10:22.080677 kubelet[2745]: I0129 11:10:22.080588 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728b6ad9-4b71-475f-a661-81e8f9bc8501" path="/var/lib/kubelet/pods/728b6ad9-4b71-475f-a661-81e8f9bc8501/volumes" Jan 29 11:10:22.081204 kubelet[2745]: I0129 11:10:22.081147 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" path="/var/lib/kubelet/pods/ecddc1ca-2734-412e-a9d4-a87cd0bff1d9/volumes" Jan 29 11:10:22.483615 sshd[4501]: Accepted publickey for core from 147.75.109.163 port 60344 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:22.484276 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:22.489192 systemd-logind[1459]: New session 21 of user core. Jan 29 11:10:22.496895 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:10:23.478313 sshd[4324]: Connection closed by authenticating user root 47.239.223.244 port 54344 [preauth] Jan 29 11:10:23.482278 systemd[1]: sshd@19-116.202.15.110:22-47.239.223.244:54344.service: Deactivated successfully. 
Jan 29 11:10:23.592586 kubelet[2745]: I0129 11:10:23.592450 2745 setters.go:602] "Node became not ready" node="ci-4186-1-0-1-dfe7c46cbd" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:10:23Z","lastTransitionTime":"2025-01-29T11:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:10:23.873795 kubelet[2745]: I0129 11:10:23.868140 2745 memory_manager.go:355] "RemoveStaleState removing state" podUID="ecddc1ca-2734-412e-a9d4-a87cd0bff1d9" containerName="cilium-agent" Jan 29 11:10:23.873795 kubelet[2745]: I0129 11:10:23.868219 2745 memory_manager.go:355] "RemoveStaleState removing state" podUID="728b6ad9-4b71-475f-a661-81e8f9bc8501" containerName="cilium-operator" Jan 29 11:10:23.882556 kubelet[2745]: W0129 11:10:23.882511 2745 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:10:23.882734 kubelet[2745]: E0129 11:10:23.882556 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:10:23.882734 kubelet[2745]: I0129 11:10:23.882603 2745 status_manager.go:890] "Failed to get status for pod" podUID="5f745286-28cb-44d4-b3f9-ee2694734392" pod="kube-system/cilium-w9wsj" err="pods \"cilium-w9wsj\" is forbidden: User 
\"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" Jan 29 11:10:23.882734 kubelet[2745]: W0129 11:10:23.882667 2745 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:10:23.882734 kubelet[2745]: E0129 11:10:23.882682 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:10:23.882906 kubelet[2745]: W0129 11:10:23.882763 2745 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:10:23.882906 kubelet[2745]: E0129 11:10:23.882776 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:10:23.882906 
kubelet[2745]: W0129 11:10:23.882816 2745 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-1-dfe7c46cbd" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object Jan 29 11:10:23.882906 kubelet[2745]: E0129 11:10:23.882846 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4186-1-0-1-dfe7c46cbd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-1-dfe7c46cbd' and this object" logger="UnhandledError" Jan 29 11:10:23.888336 systemd[1]: Created slice kubepods-burstable-pod5f745286_28cb_44d4_b3f9_ee2694734392.slice - libcontainer container kubepods-burstable-pod5f745286_28cb_44d4_b3f9_ee2694734392.slice. 
Jan 29 11:10:23.907430 kubelet[2745]: I0129 11:10:23.907383 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-bpf-maps\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907430 kubelet[2745]: I0129 11:10:23.907422 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-ipsec-secrets\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907451 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-xtables-lock\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907467 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-config-path\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907483 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-run\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907507 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-host-proc-sys-net\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907529 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-clustermesh-secrets\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.907944 kubelet[2745]: I0129 11:10:23.907543 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-hostproc\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907559 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-etc-cni-netd\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907574 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-lib-modules\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907598 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-host-proc-sys-kernel\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907613 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-cgroup\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907630 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f745286-28cb-44d4-b3f9-ee2694734392-cni-path\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908084 kubelet[2745]: I0129 11:10:23.907673 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddn9f\" (UniqueName: \"kubernetes.io/projected/5f745286-28cb-44d4-b3f9-ee2694734392-kube-api-access-ddn9f\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:23.908208 kubelet[2745]: I0129 11:10:23.907717 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f745286-28cb-44d4-b3f9-ee2694734392-hubble-tls\") pod \"cilium-w9wsj\" (UID: \"5f745286-28cb-44d4-b3f9-ee2694734392\") " pod="kube-system/cilium-w9wsj" Jan 29 11:10:24.013718 sshd[4503]: Connection closed by 147.75.109.163 port 60344 Jan 29 11:10:24.015084 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:24.021196 systemd[1]: sshd@21-116.202.15.110:22-147.75.109.163:60344.service: Deactivated successfully. 
Jan 29 11:10:24.024222 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:10:24.025591 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:10:24.027169 systemd-logind[1459]: Removed session 21. Jan 29 11:10:24.191307 systemd[1]: Started sshd@22-116.202.15.110:22-147.75.109.163:60354.service - OpenSSH per-connection server daemon (147.75.109.163:60354). Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.010792 2745 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.010933 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-clustermesh-secrets podName:5f745286-28cb-44d4-b3f9-ee2694734392 nodeName:}" failed. No retries permitted until 2025-01-29 11:10:25.510901543 +0000 UTC m=+349.600618605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-clustermesh-secrets") pod "cilium-w9wsj" (UID: "5f745286-28cb-44d4-b3f9-ee2694734392") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.011514 2745 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.011533 2745 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-w9wsj: failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.011583 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f745286-28cb-44d4-b3f9-ee2694734392-hubble-tls podName:5f745286-28cb-44d4-b3f9-ee2694734392 nodeName:}" failed. 
No retries permitted until 2025-01-29 11:10:25.511570111 +0000 UTC m=+349.601287093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5f745286-28cb-44d4-b3f9-ee2694734392-hubble-tls") pod "cilium-w9wsj" (UID: "5f745286-28cb-44d4-b3f9-ee2694734392") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.013643 kubelet[2745]: E0129 11:10:25.011604 2745 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.014278 kubelet[2745]: E0129 11:10:25.011625 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-ipsec-secrets podName:5f745286-28cb-44d4-b3f9-ee2694734392 nodeName:}" failed. No retries permitted until 2025-01-29 11:10:25.511619632 +0000 UTC m=+349.601336654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-ipsec-secrets") pod "cilium-w9wsj" (UID: "5f745286-28cb-44d4-b3f9-ee2694734392") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:10:25.014278 kubelet[2745]: E0129 11:10:25.011661 2745 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:10:25.014278 kubelet[2745]: E0129 11:10:25.011686 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-config-path podName:5f745286-28cb-44d4-b3f9-ee2694734392 nodeName:}" failed. No retries permitted until 2025-01-29 11:10:25.511679833 +0000 UTC m=+349.601396855 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5f745286-28cb-44d4-b3f9-ee2694734392-cilium-config-path") pod "cilium-w9wsj" (UID: "5f745286-28cb-44d4-b3f9-ee2694734392") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:10:25.190672 sshd[4516]: Accepted publickey for core from 147.75.109.163 port 60354 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:25.194898 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:25.202183 systemd-logind[1459]: New session 22 of user core. Jan 29 11:10:25.207874 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:10:25.693907 containerd[1481]: time="2025-01-29T11:10:25.693823227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9wsj,Uid:5f745286-28cb-44d4-b3f9-ee2694734392,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:25.730348 containerd[1481]: time="2025-01-29T11:10:25.729942547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:25.730348 containerd[1481]: time="2025-01-29T11:10:25.730019428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:25.730348 containerd[1481]: time="2025-01-29T11:10:25.730035548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:25.730709 containerd[1481]: time="2025-01-29T11:10:25.730511834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:25.764082 systemd[1]: Started cri-containerd-e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1.scope - libcontainer container e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1. Jan 29 11:10:25.791822 containerd[1481]: time="2025-01-29T11:10:25.791753101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9wsj,Uid:5f745286-28cb-44d4-b3f9-ee2694734392,Namespace:kube-system,Attempt:0,} returns sandbox id \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\"" Jan 29 11:10:25.796514 containerd[1481]: time="2025-01-29T11:10:25.795906191Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:10:25.811546 containerd[1481]: time="2025-01-29T11:10:25.811493901Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6\"" Jan 29 11:10:25.815014 containerd[1481]: time="2025-01-29T11:10:25.812797277Z" level=info msg="StartContainer for \"f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6\"" Jan 29 11:10:25.846946 systemd[1]: Started cri-containerd-f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6.scope - libcontainer container f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6. 
Jan 29 11:10:25.874529 sshd[4518]: Connection closed by 147.75.109.163 port 60354 Jan 29 11:10:25.875267 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:25.882215 containerd[1481]: time="2025-01-29T11:10:25.882163722Z" level=info msg="StartContainer for \"f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6\" returns successfully" Jan 29 11:10:25.883497 systemd[1]: sshd@22-116.202.15.110:22-147.75.109.163:60354.service: Deactivated successfully. Jan 29 11:10:25.889968 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:10:25.894029 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:10:25.895609 systemd-logind[1459]: Removed session 22. Jan 29 11:10:25.900300 systemd[1]: cri-containerd-f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6.scope: Deactivated successfully. Jan 29 11:10:25.944795 containerd[1481]: time="2025-01-29T11:10:25.944094157Z" level=info msg="shim disconnected" id=f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6 namespace=k8s.io Jan 29 11:10:25.944795 containerd[1481]: time="2025-01-29T11:10:25.944366521Z" level=warning msg="cleaning up after shim disconnected" id=f3df4ac8bdd177967825f6839ee7a95e5a687d84d5e075c24efa8edc74e1b5c6 namespace=k8s.io Jan 29 11:10:25.944795 containerd[1481]: time="2025-01-29T11:10:25.944390201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:26.053141 systemd[1]: Started sshd@23-116.202.15.110:22-147.75.109.163:60368.service - OpenSSH per-connection server daemon (147.75.109.163:60368). 
Jan 29 11:10:26.180803 containerd[1481]: time="2025-01-29T11:10:26.179976171Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:10:26.193457 containerd[1481]: time="2025-01-29T11:10:26.193320532Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5\"" Jan 29 11:10:26.195390 containerd[1481]: time="2025-01-29T11:10:26.195271156Z" level=info msg="StartContainer for \"ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5\"" Jan 29 11:10:26.229726 systemd[1]: Started cri-containerd-ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5.scope - libcontainer container ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5. Jan 29 11:10:26.260029 containerd[1481]: time="2025-01-29T11:10:26.259830855Z" level=info msg="StartContainer for \"ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5\" returns successfully" Jan 29 11:10:26.265420 systemd[1]: cri-containerd-ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5.scope: Deactivated successfully. 
Jan 29 11:10:26.296243 containerd[1481]: time="2025-01-29T11:10:26.296019132Z" level=info msg="shim disconnected" id=ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5 namespace=k8s.io Jan 29 11:10:26.296243 containerd[1481]: time="2025-01-29T11:10:26.296078093Z" level=warning msg="cleaning up after shim disconnected" id=ee48384c068c58caf8f3a12cafcf1ed5fc2fb6a8817f6827767bb0e4474228a5 namespace=k8s.io Jan 29 11:10:26.296243 containerd[1481]: time="2025-01-29T11:10:26.296085973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:26.333862 kubelet[2745]: E0129 11:10:26.333788 2745 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:10:27.044547 sshd[4635]: Accepted publickey for core from 147.75.109.163 port 60368 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:10:27.047619 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:27.055128 systemd-logind[1459]: New session 23 of user core. Jan 29 11:10:27.064981 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 11:10:27.185313 containerd[1481]: time="2025-01-29T11:10:27.185127163Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:10:27.211482 containerd[1481]: time="2025-01-29T11:10:27.210965312Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37\"" Jan 29 11:10:27.212164 containerd[1481]: time="2025-01-29T11:10:27.212112685Z" level=info msg="StartContainer for \"1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37\"" Jan 29 11:10:27.253089 systemd[1]: Started cri-containerd-1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37.scope - libcontainer container 1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37. Jan 29 11:10:27.286117 containerd[1481]: time="2025-01-29T11:10:27.286049849Z" level=info msg="StartContainer for \"1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37\" returns successfully" Jan 29 11:10:27.289449 systemd[1]: cri-containerd-1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37.scope: Deactivated successfully. 
Jan 29 11:10:27.325736 containerd[1481]: time="2025-01-29T11:10:27.325553801Z" level=info msg="shim disconnected" id=1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37 namespace=k8s.io
Jan 29 11:10:27.325736 containerd[1481]: time="2025-01-29T11:10:27.325632122Z" level=warning msg="cleaning up after shim disconnected" id=1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37 namespace=k8s.io
Jan 29 11:10:27.325736 containerd[1481]: time="2025-01-29T11:10:27.325668443Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:27.538405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c5c809451c153790243f44543ea8e91c1406885ac07e0237215f36b4ee26a37-rootfs.mount: Deactivated successfully.
Jan 29 11:10:28.191747 containerd[1481]: time="2025-01-29T11:10:28.190877803Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:10:28.210210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295217696.mount: Deactivated successfully.
Jan 29 11:10:28.212227 containerd[1481]: time="2025-01-29T11:10:28.212092494Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8\""
Jan 29 11:10:28.215547 containerd[1481]: time="2025-01-29T11:10:28.214428362Z" level=info msg="StartContainer for \"6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8\""
Jan 29 11:10:28.257985 systemd[1]: Started cri-containerd-6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8.scope - libcontainer container 6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8.
Jan 29 11:10:28.290238 containerd[1481]: time="2025-01-29T11:10:28.290098658Z" level=info msg="StartContainer for \"6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8\" returns successfully"
Jan 29 11:10:28.291060 systemd[1]: cri-containerd-6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8.scope: Deactivated successfully.
Jan 29 11:10:28.322683 containerd[1481]: time="2025-01-29T11:10:28.322576402Z" level=info msg="shim disconnected" id=6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8 namespace=k8s.io
Jan 29 11:10:28.323315 containerd[1481]: time="2025-01-29T11:10:28.322778725Z" level=warning msg="cleaning up after shim disconnected" id=6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8 namespace=k8s.io
Jan 29 11:10:28.323315 containerd[1481]: time="2025-01-29T11:10:28.322812125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:28.536897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8fe37aa1ea96e7edf71261e6eb45497a54d4c1454bf82e5a7dc736d56d1df8-rootfs.mount: Deactivated successfully.
Jan 29 11:10:29.207829 containerd[1481]: time="2025-01-29T11:10:29.207347212Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:10:29.236343 containerd[1481]: time="2025-01-29T11:10:29.236269991Z" level=info msg="CreateContainer within sandbox \"e173d5fd371f928a1e556f50f395a2dd4fccd6e516c1bad4d78d43ef767607e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135\""
Jan 29 11:10:29.238934 containerd[1481]: time="2025-01-29T11:10:29.238858222Z" level=info msg="StartContainer for \"3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135\""
Jan 29 11:10:29.276916 systemd[1]: Started cri-containerd-3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135.scope - libcontainer container 3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135.
Jan 29 11:10:29.310750 containerd[1481]: time="2025-01-29T11:10:29.310682504Z" level=info msg="StartContainer for \"3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135\" returns successfully"
Jan 29 11:10:29.630791 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:10:30.229669 kubelet[2745]: I0129 11:10:30.229538 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w9wsj" podStartSLOduration=7.229511169 podStartE2EDuration="7.229511169s" podCreationTimestamp="2025-01-29 11:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:30.229178285 +0000 UTC m=+354.318895347" watchObservedRunningTime="2025-01-29 11:10:30.229511169 +0000 UTC m=+354.319228231"
Jan 29 11:10:32.806826 systemd-networkd[1384]: lxc_health: Link UP
Jan 29 11:10:32.819078 systemd-networkd[1384]: lxc_health: Gained carrier
Jan 29 11:10:34.179199 kubelet[2745]: E0129 11:10:34.179154 2745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55794->127.0.0.1:41527: write tcp 127.0.0.1:55794->127.0.0.1:41527: write: connection reset by peer
Jan 29 11:10:34.488846 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Jan 29 11:10:36.132351 containerd[1481]: time="2025-01-29T11:10:36.132237204Z" level=info msg="StopPodSandbox for \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\""
Jan 29 11:10:36.133314 containerd[1481]: time="2025-01-29T11:10:36.132727209Z" level=info msg="TearDown network for sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" successfully"
Jan 29 11:10:36.133314 containerd[1481]: time="2025-01-29T11:10:36.132761570Z" level=info msg="StopPodSandbox for \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" returns successfully"
Jan 29 11:10:36.133818 containerd[1481]: time="2025-01-29T11:10:36.133487818Z" level=info msg="RemovePodSandbox for \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\""
Jan 29 11:10:36.133818 containerd[1481]: time="2025-01-29T11:10:36.133664980Z" level=info msg="Forcibly stopping sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\""
Jan 29 11:10:36.133818 containerd[1481]: time="2025-01-29T11:10:36.133726740Z" level=info msg="TearDown network for sandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" successfully"
Jan 29 11:10:36.139554 containerd[1481]: time="2025-01-29T11:10:36.139310681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:10:36.139554 containerd[1481]: time="2025-01-29T11:10:36.139396242Z" level=info msg="RemovePodSandbox \"112c2618ac3b1f7c6111f7b553fc5175e36abeda696cba347189514695897897\" returns successfully"
Jan 29 11:10:36.140816 containerd[1481]: time="2025-01-29T11:10:36.140438854Z" level=info msg="StopPodSandbox for \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\""
Jan 29 11:10:36.140816 containerd[1481]: time="2025-01-29T11:10:36.140549095Z" level=info msg="TearDown network for sandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" successfully"
Jan 29 11:10:36.140816 containerd[1481]: time="2025-01-29T11:10:36.140559775Z" level=info msg="StopPodSandbox for \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" returns successfully"
Jan 29 11:10:36.142429 containerd[1481]: time="2025-01-29T11:10:36.141159702Z" level=info msg="RemovePodSandbox for \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\""
Jan 29 11:10:36.142429 containerd[1481]: time="2025-01-29T11:10:36.141191142Z" level=info msg="Forcibly stopping sandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\""
Jan 29 11:10:36.142429 containerd[1481]: time="2025-01-29T11:10:36.141254143Z" level=info msg="TearDown network for sandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" successfully"
Jan 29 11:10:36.145101 containerd[1481]: time="2025-01-29T11:10:36.145051304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:10:36.145329 containerd[1481]: time="2025-01-29T11:10:36.145309427Z" level=info msg="RemovePodSandbox \"bf4bce2d8785d40a31badf1fee8724914739cbfafe854fdc0e1441f1d1290ab7\" returns successfully"
Jan 29 11:10:38.432013 systemd[1]: run-containerd-runc-k8s.io-3f8ef61f32c86f2802f40b70ddace657391085ff1e55acb9058e8f07702dd135-runc.4DSAJI.mount: Deactivated successfully.
Jan 29 11:10:38.658128 sshd[4698]: Connection closed by 147.75.109.163 port 60368
Jan 29 11:10:38.658905 sshd-session[4635]: pam_unix(sshd:session): session closed for user core
Jan 29 11:10:38.664469 systemd[1]: sshd@23-116.202.15.110:22-147.75.109.163:60368.service: Deactivated successfully.
Jan 29 11:10:38.669684 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:10:38.672511 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:10:38.674320 systemd-logind[1459]: Removed session 23.
Jan 29 11:10:53.055728 kubelet[2745]: E0129 11:10:53.055607 2745 controller.go:195] "Failed to update lease" err="Put \"https://116.202.15.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-dfe7c46cbd?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:10:53.500577 kubelet[2745]: E0129 11:10:53.500093 2745 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36674->10.0.0.2:2379: read: connection timed out"
Jan 29 11:10:54.803666 systemd[1]: cri-containerd-acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5.scope: Deactivated successfully.
Jan 29 11:10:54.804027 systemd[1]: cri-containerd-acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5.scope: Consumed 6.088s CPU time, 19.6M memory peak, 0B memory swap peak.
Jan 29 11:10:54.833500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5-rootfs.mount: Deactivated successfully.
Jan 29 11:10:54.840402 containerd[1481]: time="2025-01-29T11:10:54.840115127Z" level=info msg="shim disconnected" id=acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5 namespace=k8s.io
Jan 29 11:10:54.840402 containerd[1481]: time="2025-01-29T11:10:54.840185048Z" level=warning msg="cleaning up after shim disconnected" id=acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5 namespace=k8s.io
Jan 29 11:10:54.840402 containerd[1481]: time="2025-01-29T11:10:54.840193928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:10:55.279148 kubelet[2745]: I0129 11:10:55.279096 2745 scope.go:117] "RemoveContainer" containerID="acf11183ea205d294abb0da62a04d9d65ccfb42ac4442939d08a2b9528d763c5"
Jan 29 11:10:55.282080 containerd[1481]: time="2025-01-29T11:10:55.281672689Z" level=info msg="CreateContainer within sandbox \"7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 11:10:55.307299 containerd[1481]: time="2025-01-29T11:10:55.307151039Z" level=info msg="CreateContainer within sandbox \"7f88446b4add3aa7391866e08355565f896b25a486184e0af8d77531950b3409\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0d20cde5cbf9e893f9c258cc95b37c3985c50c09d36790ac86fa795f295e6d66\""
Jan 29 11:10:55.307861 containerd[1481]: time="2025-01-29T11:10:55.307768604Z" level=info msg="StartContainer for \"0d20cde5cbf9e893f9c258cc95b37c3985c50c09d36790ac86fa795f295e6d66\""
Jan 29 11:10:55.344160 systemd[1]: Started cri-containerd-0d20cde5cbf9e893f9c258cc95b37c3985c50c09d36790ac86fa795f295e6d66.scope - libcontainer container 0d20cde5cbf9e893f9c258cc95b37c3985c50c09d36790ac86fa795f295e6d66.
Jan 29 11:10:55.388684 containerd[1481]: time="2025-01-29T11:10:55.388225691Z" level=info msg="StartContainer for \"0d20cde5cbf9e893f9c258cc95b37c3985c50c09d36790ac86fa795f295e6d66\" returns successfully"
Jan 29 11:10:57.596082 kubelet[2745]: E0129 11:10:57.595847 2745 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36496->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-0-1-dfe7c46cbd.181f255968e047c4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-0-1-dfe7c46cbd,UID:1ab45a9571af7eebb2c5f2f55f8143ff,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-1-dfe7c46cbd,},FirstTimestamp:2025-01-29 11:10:47.130146756 +0000 UTC m=+371.219863858,LastTimestamp:2025-01-29 11:10:47.130146756 +0000 UTC m=+371.219863858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-1-dfe7c46cbd,}"
Jan 29 11:10:59.371144 systemd[1]: cri-containerd-e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc.scope: Deactivated successfully.
Jan 29 11:10:59.372793 systemd[1]: cri-containerd-e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc.scope: Consumed 5.982s CPU time, 15.7M memory peak, 0B memory swap peak.
Jan 29 11:10:59.396153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc-rootfs.mount: Deactivated successfully.
Jan 29 11:10:59.410045 containerd[1481]: time="2025-01-29T11:10:59.409893693Z" level=info msg="shim disconnected" id=e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc namespace=k8s.io
Jan 29 11:10:59.410045 containerd[1481]: time="2025-01-29T11:10:59.409988734Z" level=warning msg="cleaning up after shim disconnected" id=e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc namespace=k8s.io
Jan 29 11:10:59.410045 containerd[1481]: time="2025-01-29T11:10:59.410007614Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:00.297697 kubelet[2745]: I0129 11:11:00.297136 2745 scope.go:117] "RemoveContainer" containerID="e2f926d312609f52913b456621b06295f28823ddeaca090d3bc390566189d9dc"
Jan 29 11:11:00.300049 containerd[1481]: time="2025-01-29T11:11:00.299809573Z" level=info msg="CreateContainer within sandbox \"e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 11:11:00.316965 containerd[1481]: time="2025-01-29T11:11:00.316918560Z" level=info msg="CreateContainer within sandbox \"e12477d7641c9c4f4e3a2fb4e9ddfd4676dafecc4bf04eff02263397243db754\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6cde51ad2b64fcc45a5dbc7eab3bb213b1841aba5db26ad5e97ce861d0e6ddca\""
Jan 29 11:11:00.318033 containerd[1481]: time="2025-01-29T11:11:00.317811408Z" level=info msg="StartContainer for \"6cde51ad2b64fcc45a5dbc7eab3bb213b1841aba5db26ad5e97ce861d0e6ddca\""
Jan 29 11:11:00.349870 systemd[1]: Started cri-containerd-6cde51ad2b64fcc45a5dbc7eab3bb213b1841aba5db26ad5e97ce861d0e6ddca.scope - libcontainer container 6cde51ad2b64fcc45a5dbc7eab3bb213b1841aba5db26ad5e97ce861d0e6ddca.
Jan 29 11:11:00.387444 containerd[1481]: time="2025-01-29T11:11:00.387377124Z" level=info msg="StartContainer for \"6cde51ad2b64fcc45a5dbc7eab3bb213b1841aba5db26ad5e97ce861d0e6ddca\" returns successfully"