Dec 13 01:29:26.883153 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:29:26.883174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:26.883182 kernel: BIOS-provided physical RAM map:
Dec 13 01:29:26.883188 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:29:26.883193 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:29:26.883217 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:29:26.883223 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 01:29:26.883228 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 01:29:26.883236 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:29:26.883241 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:29:26.883247 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:29:26.883269 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:29:26.883275 kernel: NX (Execute Disable) protection: active
Dec 13 01:29:26.883280 kernel: APIC: Static calls initialized
Dec 13 01:29:26.883293 kernel: SMBIOS 2.8 present.
Dec 13 01:29:26.883312 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 01:29:26.883327 kernel: Hypervisor detected: KVM
Dec 13 01:29:26.883339 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:29:26.883354 kernel: kvm-clock: using sched offset of 2865959988 cycles
Dec 13 01:29:26.883370 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:29:26.883385 kernel: tsc: Detected 2445.406 MHz processor
Dec 13 01:29:26.883406 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:29:26.883422 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:29:26.883433 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 01:29:26.883439 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:29:26.883445 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:29:26.883450 kernel: Using GB pages for direct mapping
Dec 13 01:29:26.883456 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:29:26.883461 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 01:29:26.883467 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883472 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883478 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883486 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 01:29:26.883491 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883497 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883503 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883508 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:26.883932 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 01:29:26.883943 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 01:29:26.883950 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 01:29:26.883963 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 01:29:26.883969 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 01:29:26.883975 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 01:29:26.883981 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 01:29:26.883986 kernel: No NUMA configuration found
Dec 13 01:29:26.883992 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 01:29:26.884000 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 01:29:26.884006 kernel: Zone ranges:
Dec 13 01:29:26.884012 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:29:26.884018 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 01:29:26.884024 kernel: Normal empty
Dec 13 01:29:26.884030 kernel: Movable zone start for each node
Dec 13 01:29:26.884035 kernel: Early memory node ranges
Dec 13 01:29:26.884041 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:29:26.884047 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 01:29:26.884053 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 01:29:26.884061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:29:26.884067 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:29:26.884072 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:29:26.884078 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:29:26.884084 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:29:26.884090 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:29:26.884096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:29:26.884101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:29:26.884107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:29:26.884115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:29:26.884121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:29:26.884127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:29:26.884133 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:29:26.884138 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:29:26.884144 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:29:26.884150 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:29:26.884156 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:29:26.884162 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:29:26.884170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:29:26.884175 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:29:26.884181 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:29:26.884187 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:29:26.884193 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:29:26.884199 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:26.884206 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:29:26.884212 kernel: random: crng init done
Dec 13 01:29:26.884219 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:29:26.884225 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:26.884231 kernel: Fallback order for Node 0: 0
Dec 13 01:29:26.884237 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 01:29:26.884243 kernel: Policy zone: DMA32
Dec 13 01:29:26.884248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:29:26.884255 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Dec 13 01:29:26.884261 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:29:26.884267 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:29:26.884274 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:29:26.884280 kernel: Dynamic Preempt: voluntary
Dec 13 01:29:26.884286 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:29:26.884293 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:29:26.884299 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:29:26.884305 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:29:26.884311 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:29:26.884317 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:29:26.884323 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:29:26.884331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:29:26.884336 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:29:26.884342 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:29:26.884348 kernel: Console: colour VGA+ 80x25
Dec 13 01:29:26.884354 kernel: printk: console [tty0] enabled
Dec 13 01:29:26.884360 kernel: printk: console [ttyS0] enabled
Dec 13 01:29:26.884365 kernel: ACPI: Core revision 20230628
Dec 13 01:29:26.884371 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:29:26.884377 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:29:26.884385 kernel: x2apic enabled
Dec 13 01:29:26.884391 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:29:26.884397 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:29:26.884403 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:29:26.884408 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Dec 13 01:29:26.884414 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:29:26.884420 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:29:26.884426 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:29:26.884432 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:29:26.884447 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:29:26.884453 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:29:26.884459 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:29:26.884467 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:29:26.884473 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:29:26.884479 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:29:26.884485 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:29:26.884492 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:29:26.884498 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:29:26.884504 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:29:26.884511 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:29:26.885106 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:29:26.885113 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:29:26.885120 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:29:26.885126 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:29:26.885132 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:29:26.885187 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:29:26.885194 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:29:26.885201 kernel: landlock: Up and running.
Dec 13 01:29:26.885207 kernel: SELinux: Initializing.
Dec 13 01:29:26.885214 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:26.885220 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:26.885226 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:29:26.885233 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:26.885239 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:26.885248 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:26.885255 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:29:26.885261 kernel: ... version: 0
Dec 13 01:29:26.885267 kernel: ... bit width: 48
Dec 13 01:29:26.885273 kernel: ... generic registers: 6
Dec 13 01:29:26.885279 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:29:26.885285 kernel: ... max period: 00007fffffffffff
Dec 13 01:29:26.885291 kernel: ... fixed-purpose events: 0
Dec 13 01:29:26.885297 kernel: ... event mask: 000000000000003f
Dec 13 01:29:26.885305 kernel: signal: max sigframe size: 1776
Dec 13 01:29:26.885311 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:29:26.885318 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:29:26.885325 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:29:26.885331 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:29:26.885337 kernel: .... node #0, CPUs: #1
Dec 13 01:29:26.885343 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:29:26.885349 kernel: smpboot: Max logical packages: 1
Dec 13 01:29:26.885355 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Dec 13 01:29:26.885363 kernel: devtmpfs: initialized
Dec 13 01:29:26.885370 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:29:26.885376 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:29:26.885382 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:29:26.885388 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:29:26.885394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:26.885400 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:26.885407 kernel: audit: type=2000 audit(1734053365.436:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:26.885413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:26.885421 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:29:26.885427 kernel: cpuidle: using governor menu
Dec 13 01:29:26.885433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:26.885439 kernel: dca service started, version 1.12.1
Dec 13 01:29:26.885446 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:29:26.885452 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:29:26.885458 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:29:26.885464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:26.885470 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:26.885478 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:26.885485 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:26.885491 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:26.885497 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:26.885503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:26.885509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:26.886547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:26.886560 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:29:26.886567 kernel: ACPI: Interpreter enabled
Dec 13 01:29:26.886577 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:29:26.886584 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:29:26.886590 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:29:26.886596 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:29:26.886603 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:29:26.886609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:26.886766 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:26.886881 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:29:26.886990 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:29:26.886999 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:26.887112 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:29:26.887209 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:29:26.887303 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:26.887397 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:26.887489 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:26.887663 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:26.887762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:26.887884 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:29:26.887999 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 01:29:26.888104 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 01:29:26.888207 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 01:29:26.888315 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 01:29:26.893635 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:29:26.893781 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:29:26.893902 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.894011 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 01:29:26.894143 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.894248 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 01:29:26.894368 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.894474 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 01:29:26.894621 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.894742 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 01:29:26.894860 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.894965 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 01:29:26.895084 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.895189 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 01:29:26.895300 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.896682 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 01:29:26.896802 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.896908 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 01:29:26.897031 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:26.897136 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 01:29:26.897248 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:29:26.897352 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:29:26.897466 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:29:26.898614 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 01:29:26.898762 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 01:29:26.898880 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:29:26.898985 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:29:26.899102 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:26.899212 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 01:29:26.899321 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 01:29:26.899430 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 01:29:26.901609 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:26.901726 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:26.901831 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:26.901948 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:29:26.902076 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 01:29:26.902181 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:26.902290 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:26.902392 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:26.902507 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:29:26.902636 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 01:29:26.902744 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 01:29:26.902846 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:26.902947 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:26.903048 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:26.903169 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:29:26.903279 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 01:29:26.903383 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:26.903486 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:26.908009 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:26.908148 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:29:26.908264 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 01:29:26.908379 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:26.908484 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:26.908624 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:26.908778 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:29:26.908894 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 01:29:26.909016 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 01:29:26.909123 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:26.909233 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:26.909395 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:26.909415 kernel: acpiphp: Slot [0] registered
Dec 13 01:29:26.909687 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:26.909868 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 01:29:26.910062 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 01:29:26.910245 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 01:29:26.910415 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:26.910614 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:26.910781 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:26.910800 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:29:26.910950 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:26.911059 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:26.911189 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:26.911208 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:29:26.911374 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:26.915859 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:26.916040 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:26.916060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:29:26.916071 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:29:26.916083 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:29:26.916095 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:29:26.916105 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:29:26.916117 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:29:26.916128 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:29:26.916146 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:29:26.916158 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:29:26.916170 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:29:26.916180 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:29:26.916192 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:29:26.916204 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:29:26.916215 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:29:26.916227 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:29:26.916236 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:29:26.916246 kernel: iommu: Default domain type: Translated
Dec 13 01:29:26.916253 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:29:26.916259 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:29:26.916267 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:29:26.916279 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:29:26.916291 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 01:29:26.916466 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:29:26.916663 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:29:26.916839 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:29:26.916859 kernel: vgaarb: loaded
Dec 13 01:29:26.916872 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:29:26.916885 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:29:26.916897 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:29:26.916908 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:26.916917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:26.916927 kernel: pnp: PnP ACPI init
Dec 13 01:29:26.917113 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:29:26.917140 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:29:26.917154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:29:26.917166 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:26.917176 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:26.917185 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:29:26.917197 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:26.917209 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:29:26.917220 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:29:26.917237 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:29:26.917247 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:26.917259 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:26.917271 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:26.917283 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:29:26.917458 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:29:26.917656 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:29:26.917831 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:29:26.918031 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:29:26.918148 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:29:26.918293 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:29:26.918401 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:26.918506 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:26.918638 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:26.918745 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:26.918847 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:26.918955 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:26.919057 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:26.919160 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:26.919298 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:26.919448 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:26.923853 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:26.923967 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:26.924087 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:26.924210 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:26.924313 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:26.924414 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:26.924535 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:26.924643 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:26.924744 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:26.924846 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 01:29:26.924949 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:26.925052 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:26.925159 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:26.925261 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 01:29:26.925362 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:26.925463 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:26.925583 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:26.925688 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 01:29:26.925790 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:26.925953 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:26.926079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:29:26.926180 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:29:26.926280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:26.926375 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:26.926468 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:26.926587 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:26.926699 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:26.926800 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:26.926907 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:26.927013 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:26.927119 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:26.927219 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:26.927329 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:26.927429 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:26.927611 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:26.927720 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:26.927826 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:26.927924 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:26.928029 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 01:29:26.928127 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:26.928224 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:26.928330 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 01:29:26.928433 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:26.928546 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:26.928656 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 01:29:26.928756 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:26.928853 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:26.928864 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:29:26.928871 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:26.928882 kernel: Initialise system trusted keyrings
Dec 13 01:29:26.928889 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:29:26.928895 kernel: Key type asymmetric registered
Dec 13 01:29:26.928902 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:26.928908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:29:26.928915 kernel: io scheduler mq-deadline registered
Dec 13 01:29:26.928922
kernel: io scheduler kyber registered Dec 13 01:29:26.928928 kernel: io scheduler bfq registered Dec 13 01:29:26.929034 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:29:26.929141 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:29:26.929246 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:29:26.929348 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:29:26.929451 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:29:26.931639 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:29:26.931760 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:29:26.931865 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:29:26.931969 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:29:26.932077 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:29:26.932180 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:29:26.932282 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:29:26.932383 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:29:26.932485 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:29:26.932607 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:29:26.932713 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:29:26.932723 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:29:26.932824 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:29:26.932931 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 01:29:26.932940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:29:26.932947 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:29:26.932954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:29:26.932960 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:29:26.932967 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:29:26.932974 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:29:26.932981 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:29:26.933092 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:29:26.933103 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:29:26.933199 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:29:26.933295 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:26 UTC (1734053366) Dec 13 01:29:26.933391 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:29:26.933400 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:29:26.933407 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:29:26.933414 kernel: Segment Routing with IPv6 Dec 13 01:29:26.933424 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:29:26.933430 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:29:26.933437 kernel: Key type dns_resolver registered Dec 13 01:29:26.933444 kernel: IPI shorthand broadcast: enabled Dec 13 01:29:26.933450 kernel: sched_clock: Marking stable (1074007623, 137695964)->(1222214747, -10511160) Dec 13 01:29:26.933457 kernel: registered taskstats version 1 Dec 13 01:29:26.933464 kernel: Loading compiled-in X.509 certificates Dec 13 01:29:26.933470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:29:26.933477 kernel: Key type .fscrypt registered Dec 13 01:29:26.933486 kernel: Key type fscrypt-provisioning registered Dec 13 01:29:26.933493 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:29:26.933499 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:29:26.933506 kernel: ima: No architecture policies found
Dec 13 01:29:26.935126 kernel: clk: Disabling unused clocks
Dec 13 01:29:26.935145 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:29:26.935153 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:29:26.935161 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:29:26.935167 kernel: Run /init as init process
Dec 13 01:29:26.935178 kernel: with arguments:
Dec 13 01:29:26.935185 kernel: /init
Dec 13 01:29:26.935192 kernel: with environment:
Dec 13 01:29:26.935198 kernel: HOME=/
Dec 13 01:29:26.935205 kernel: TERM=linux
Dec 13 01:29:26.935212 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:29:26.935221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:26.935231 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:26.935240 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:26.935247 systemd[1]: Running in initrd.
Dec 13 01:29:26.935254 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:29:26.935261 systemd[1]: Hostname set to .
Dec 13 01:29:26.935268 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:26.935275 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:29:26.935282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:26.935289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:26.935299 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:29:26.935306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:26.935314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:29:26.935321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:29:26.935329 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:29:26.935337 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:29:26.935346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:26.935353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:26.935360 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:26.935367 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:26.935374 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:26.935381 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:26.935388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:26.935395 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:26.935402 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:29:26.935411 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:29:26.935418 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:26.935425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:26.935432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:26.935439 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:26.935446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:29:26.935453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:26.935460 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:29:26.935468 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:29:26.935477 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:26.935484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:26.935491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:26.935498 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:26.935548 systemd-journald[187]: Collecting audit messages is disabled.
Dec 13 01:29:26.935572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:26.935580 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:29:26.935587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:29:26.935597 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:26.935604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:29:26.935611 kernel: Bridge firewalling registered
Dec 13 01:29:26.935619 systemd-journald[187]: Journal started
Dec 13 01:29:26.935636 systemd-journald[187]: Runtime Journal (/run/log/journal/10666363ce234f5fba3c8ae18ec04d00) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:26.884966 systemd-modules-load[188]: Inserted module 'overlay'
Dec 13 01:29:26.934411 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 13 01:29:26.941550 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:26.941809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:26.942500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:26.951642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:26.955675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:26.957186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:26.958890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:26.972408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:26.975643 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:29:26.987590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:26.989640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:26.990377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:26.998697 dracut-cmdline[216]: dracut-dracut-053
Dec 13 01:29:26.999726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:27.004687 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:27.026051 systemd-resolved[225]: Positive Trust Anchors:
Dec 13 01:29:27.026792 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:27.026821 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:27.032727 systemd-resolved[225]: Defaulting to hostname 'linux'.
Dec 13 01:29:27.033785 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:27.034560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:27.092564 kernel: SCSI subsystem initialized
Dec 13 01:29:27.100542 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:29:27.110545 kernel: iscsi: registered transport (tcp)
Dec 13 01:29:27.128743 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:29:27.128809 kernel: QLogic iSCSI HBA Driver
Dec 13 01:29:27.172974 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:27.178642 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:29:27.201547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:29:27.201615 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:29:27.202794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:29:27.242547 kernel: raid6: avx2x4 gen() 31136 MB/s
Dec 13 01:29:27.259537 kernel: raid6: avx2x2 gen() 29419 MB/s
Dec 13 01:29:27.276676 kernel: raid6: avx2x1 gen() 25322 MB/s
Dec 13 01:29:27.276716 kernel: raid6: using algorithm avx2x4 gen() 31136 MB/s
Dec 13 01:29:27.294734 kernel: raid6: .... xor() 4438 MB/s, rmw enabled
Dec 13 01:29:27.294797 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:29:27.313561 kernel: xor: automatically using best checksumming function avx
Dec 13 01:29:27.435551 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:29:27.447848 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:27.452701 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:27.465845 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Dec 13 01:29:27.469739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:27.478675 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:29:27.491338 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Dec 13 01:29:27.525241 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:27.538712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:27.600899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:27.608672 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:29:27.623402 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:27.625188 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:27.626766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:27.627773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:27.632698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:29:27.643968 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:27.690959 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:29:27.691014 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:29:27.744832 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:29:27.744905 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:29:27.751540 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 01:29:27.786873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:27.787749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:27.789013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:27.790294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:27.790402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:27.791211 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:27.801657 kernel: libata version 3.00 loaded.
Dec 13 01:29:27.799826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:27.807539 kernel: ACPI: bus type USB registered
Dec 13 01:29:27.813545 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:29:27.817545 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:29:27.836579 kernel: usbcore: registered new interface driver hub
Dec 13 01:29:27.836597 kernel: usbcore: registered new device driver usb
Dec 13 01:29:27.836613 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:29:27.836623 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:29:27.836788 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:29:27.836917 kernel: scsi host1: ahci
Dec 13 01:29:27.837054 kernel: scsi host2: ahci
Dec 13 01:29:27.837179 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 13 01:29:27.839815 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 01:29:27.840010 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:29:27.840166 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 13 01:29:27.840297 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:29:27.840425 kernel: scsi host3: ahci
Dec 13 01:29:27.841597 kernel: scsi host4: ahci
Dec 13 01:29:27.841830 kernel: scsi host5: ahci
Dec 13 01:29:27.841958 kernel: scsi host6: ahci
Dec 13 01:29:27.842107 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Dec 13 01:29:27.842118 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Dec 13 01:29:27.842126 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Dec 13 01:29:27.842134 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Dec 13 01:29:27.842142 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Dec 13 01:29:27.842150 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Dec 13 01:29:27.842158 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:29:27.842166 kernel: GPT:17805311 != 80003071
Dec 13 01:29:27.842176 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:29:27.842184 kernel: GPT:17805311 != 80003071
Dec 13 01:29:27.842192 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:29:27.842200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:27.842208 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:29:27.889858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:27.901670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:27.918582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:28.152494 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:28.152582 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:28.152594 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:29:28.152602 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:28.152623 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:28.152632 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:29:28.154843 kernel: ata1.00: applying bridge limits
Dec 13 01:29:28.156194 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:28.156544 kernel: ata1.00: configured for UDMA/100
Dec 13 01:29:28.157785 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:29:28.185695 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:28.212969 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 01:29:28.213198 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 01:29:28.213405 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:28.213641 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 01:29:28.213836 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 01:29:28.214071 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:29:28.214245 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 01:29:28.214375 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 01:29:28.214545 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:29:28.214690 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 01:29:28.214814 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:29:28.233479 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:29:28.233497 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449)
Dec 13 01:29:28.233506 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (454)
Dec 13 01:29:28.233545 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:29:28.230912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 01:29:28.241782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 01:29:28.248195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:28.249381 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:28.254664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:28.261664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:29:28.266615 disk-uuid[572]: Primary Header is updated.
Dec 13 01:29:28.266615 disk-uuid[572]: Secondary Entries is updated.
Dec 13 01:29:28.266615 disk-uuid[572]: Secondary Header is updated.
Dec 13 01:29:28.272550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:28.278544 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:28.442571 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 01:29:28.579565 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:29:28.585068 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:29:28.585106 kernel: usbhid: USB HID core driver
Dec 13 01:29:28.591653 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 01:29:28.591683 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 13 01:29:29.282561 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:29.283054 disk-uuid[574]: The operation has completed successfully.
Dec 13 01:29:29.332652 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:29:29.332757 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:29:29.346653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:29:29.350139 sh[592]: Success
Dec 13 01:29:29.361539 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:29:29.403847 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:29:29.411647 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:29:29.412276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:29:29.426089 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:29:29.426138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:29.427655 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:29:29.430241 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:29:29.430258 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:29:29.438554 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:29:29.439972 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:29:29.441078 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:29:29.456734 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:29:29.458698 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:29:29.470226 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:29.470264 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:29.470274 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:29.476163 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:29.476206 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:29.487108 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:29:29.488041 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:29.492872 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:29:29.498657 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:29:29.572188 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:29.575981 ignition[684]: Ignition 2.19.0
Dec 13 01:29:29.576133 ignition[684]: Stage: fetch-offline
Dec 13 01:29:29.582635 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:29.576173 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:29.584054 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:29.576183 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:29.576264 ignition[684]: parsed url from cmdline: ""
Dec 13 01:29:29.576270 ignition[684]: no config URL provided
Dec 13 01:29:29.576275 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:29:29.576283 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:29:29.576288 ignition[684]: failed to fetch config: resource requires networking
Dec 13 01:29:29.576456 ignition[684]: Ignition finished successfully
Dec 13 01:29:29.601363 systemd-networkd[778]: lo: Link UP
Dec 13 01:29:29.601373 systemd-networkd[778]: lo: Gained carrier
Dec 13 01:29:29.603835 systemd-networkd[778]: Enumeration completed
Dec 13 01:29:29.604037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:29.604582 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:29.604586 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:29.605453 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:29.606596 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:29.606600 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:29.607839 systemd-networkd[778]: eth0: Link UP
Dec 13 01:29:29.607843 systemd-networkd[778]: eth0: Gained carrier
Dec 13 01:29:29.607849 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:29.611901 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:29:29.612350 systemd-networkd[778]: eth1: Link UP
Dec 13 01:29:29.612353 systemd-networkd[778]: eth1: Gained carrier
Dec 13 01:29:29.612361 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:29.625267 ignition[781]: Ignition 2.19.0
Dec 13 01:29:29.625278 ignition[781]: Stage: fetch
Dec 13 01:29:29.625425 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:29.625436 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:29.625538 ignition[781]: parsed url from cmdline: ""
Dec 13 01:29:29.625541 ignition[781]: no config URL provided
Dec 13 01:29:29.625546 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:29:29.625555 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:29:29.625571 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 01:29:29.625707 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 01:29:29.631572 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:29.735584 systemd-networkd[778]: eth0: DHCPv4 address 188.245.236.49/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:29:29.825879 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 01:29:29.829757 ignition[781]: GET result: OK
Dec 13 01:29:29.829863 ignition[781]: parsing config with SHA512: 7780f70d981492e7e8bd9a69cc4ba56e8577f8d8455d2d80aba924c9c7d8129cabbb43456e6dc80e682abe8e3fb7d1d88bb798c77bba1acf0a47370a760d54de
Dec 13 01:29:29.833185 unknown[781]: fetched base config from "system"
Dec 13 01:29:29.833199 unknown[781]: fetched base config from "system"
Dec 13 01:29:29.833534 ignition[781]: fetch: fetch complete
Dec 13 01:29:29.833207 unknown[781]: fetched user config from "hetzner"
Dec 13 01:29:29.833541 ignition[781]: fetch: fetch passed
Dec 13 01:29:29.833584 ignition[781]: Ignition finished successfully
Dec 13 01:29:29.837047 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:29:29.842661 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:29:29.857292 ignition[788]: Ignition 2.19.0
Dec 13 01:29:29.857303 ignition[788]: Stage: kargs
Dec 13 01:29:29.857468 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:29.857481 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:29.858295 ignition[788]: kargs: kargs passed
Dec 13 01:29:29.859820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:29.858339 ignition[788]: Ignition finished successfully
Dec 13 01:29:29.868753 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:29:29.881878 ignition[795]: Ignition 2.19.0
Dec 13 01:29:29.881891 ignition[795]: Stage: disks
Dec 13 01:29:29.882071 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:29.882084 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:29.884462 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:29:29.882899 ignition[795]: disks: disks passed
Dec 13 01:29:29.882945 ignition[795]: Ignition finished successfully
Dec 13 01:29:29.886126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:29.887051 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:29:29.887951 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:29.888951 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:29.889972 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:29.899660 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:29:29.913083 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:29:29.915444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:29:29.922611 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:29:30.001546 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:29:30.001993 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:29:30.002925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:30.008575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:30.011612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:29:30.015168 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:29:30.016422 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:29:30.017312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:30.021123 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Dec 13 01:29:30.021148 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:30.023538 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:30.023561 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:30.027543 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:30.027565 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:30.030992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:30.031627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:29:30.038776 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:29:30.077617 coreos-metadata[814]: Dec 13 01:29:30.077 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 01:29:30.079574 coreos-metadata[814]: Dec 13 01:29:30.079 INFO Fetch successful
Dec 13 01:29:30.080474 coreos-metadata[814]: Dec 13 01:29:30.080 INFO wrote hostname ci-4081-2-1-f-56be046f0f to /sysroot/etc/hostname
Dec 13 01:29:30.082507 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:29:30.083609 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:30.087259 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:29:30.091552 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:29:30.095564 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:29:30.170865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:30.175604 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:29:30.179434 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:30.186575 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:30.203756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:30.206671 ignition[929]: INFO : Ignition 2.19.0
Dec 13 01:29:30.206671 ignition[929]: INFO : Stage: mount
Dec 13 01:29:30.207679 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:30.207679 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:30.207679 ignition[929]: INFO : mount: mount passed
Dec 13 01:29:30.207679 ignition[929]: INFO : Ignition finished successfully
Dec 13 01:29:30.208916 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:29:30.216627 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:29:30.425104 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:29:30.436747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:30.446561 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Dec 13 01:29:30.449782 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:30.449814 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:30.452361 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:30.456676 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:30.456712 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:30.460082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:30.479965 ignition[956]: INFO : Ignition 2.19.0
Dec 13 01:29:30.479965 ignition[956]: INFO : Stage: files
Dec 13 01:29:30.481341 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:30.481341 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:30.481341 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:29:30.484243 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:29:30.484243 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:29:30.486049 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:29:30.486049 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:29:30.487735 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:29:30.486172 unknown[956]: wrote ssh authorized keys file for user: core
Dec 13 01:29:30.489413 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:30.489413 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:29:30.571263 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:29:30.719419 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:30.719419 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:30.721790 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:29:30.819799 systemd-networkd[778]: eth1: Gained IPv6LL
Dec 13 01:29:31.011764 systemd-networkd[778]: eth0: Gained IPv6LL
Dec 13 01:29:31.258232 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:29:31.341915 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:31.341915 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:29:31.344179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:29:31.879649 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:29:32.103808 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:29:32.103808 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:29:32.105453 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:32.112881 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:32.112881 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:32.112881 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:32.112881 ignition[956]: INFO : files: files passed
Dec 13 01:29:32.112881 ignition[956]: INFO : Ignition finished successfully
Dec 13 01:29:32.108402 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:29:32.115728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:29:32.118048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:29:32.127793 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:29:32.128479 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:29:32.132616 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:32.132616 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:32.134753 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:32.137054 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:32.137824 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:29:32.143691 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:29:32.169835 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:29:32.169951 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:29:32.171136 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:32.172032 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:29:32.173078 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:29:32.174393 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:29:32.188820 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:32.193640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:29:32.202675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:32.204080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:32.204763 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:29:32.205774 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:29:32.205878 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:32.206979 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:29:32.207647 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:29:32.208609 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:29:32.209494 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:32.210486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:32.211559 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:29:32.212624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:32.213683 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:29:32.214699 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:29:32.215705 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:29:32.216601 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:29:32.216705 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:32.217791 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:32.218444 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:32.219335 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:29:32.219432 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:32.220358 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:29:32.220451 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:32.221797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:29:32.221898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:32.222595 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:29:32.222750 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:29:32.223377 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:29:32.223466 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:32.230032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:29:32.230480 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:29:32.230648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:32.233714 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:32.234170 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:29:32.234328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:32.234945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:29:32.235075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:32.240251 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:29:32.240366 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:29:32.249365 ignition[1010]: INFO : Ignition 2.19.0
Dec 13 01:29:32.249365 ignition[1010]: INFO : Stage: umount
Dec 13 01:29:32.250508 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:32.250508 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:32.252989 ignition[1010]: INFO : umount: umount passed
Dec 13 01:29:32.252989 ignition[1010]: INFO : Ignition finished successfully
Dec 13 01:29:32.252727 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:29:32.252855 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:29:32.255137 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:29:32.255199 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:29:32.256006 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:29:32.256055 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:32.258757 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:29:32.258805 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:29:32.259600 systemd[1]: Stopped target network.target - Network.
Dec 13 01:29:32.259978 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:29:32.260027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:32.260498 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:29:32.261054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:29:32.267809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:32.268512 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:29:32.269785 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:29:32.271806 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:29:32.271853 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:32.277344 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:29:32.277386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:32.278336 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:29:32.278407 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:29:32.279918 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:29:32.279969 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:32.281916 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:29:32.282950 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:32.286380 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:29:32.289756 systemd-networkd[778]: eth0: DHCPv6 lease lost
Dec 13 01:29:32.293088 systemd-networkd[778]: eth1: DHCPv6 lease lost
Dec 13 01:29:32.295313 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:29:32.295997 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:29:32.297598 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:29:32.297710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:32.301024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:29:32.301073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:32.309564 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:29:32.310029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:29:32.310087 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:32.310727 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:29:32.310774 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:32.311709 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:29:32.311754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:32.312668 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:29:32.312713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:32.313814 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:32.315460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:29:32.315573 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:32.322317 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:29:32.322401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:32.323903 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:29:32.324055 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:32.326208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:29:32.326264 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:32.326845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:29:32.326883 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:32.327342 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:29:32.327387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:32.328671 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:29:32.328717 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:32.329813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:32.329867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:32.332668 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:29:32.334775 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:29:32.334827 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:32.335828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:32.335873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:32.338147 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:29:32.338278 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:29:32.345225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:29:32.345344 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:29:32.346490 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:29:32.360769 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:29:32.367865 systemd[1]: Switching root.
Dec 13 01:29:32.396806 systemd-journald[187]: Journal stopped
Dec 13 01:29:33.312649 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:29:33.312720 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:29:33.312733 kernel: SELinux: policy capability open_perms=1
Dec 13 01:29:33.312748 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:29:33.312766 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:29:33.312776 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:29:33.312792 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:29:33.312802 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:29:33.312811 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:29:33.312824 kernel: audit: type=1403 audit(1734053372.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:29:33.312835 systemd[1]: Successfully loaded SELinux policy in 41.700ms.
Dec 13 01:29:33.312851 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.442ms.
Dec 13 01:29:33.312861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:33.312872 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:33.312884 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:33.312894 systemd[1]: Detected first boot.
Dec 13 01:29:33.312904 systemd[1]: Hostname set to <ci-4081-2-1-f-56be046f0f>.
Dec 13 01:29:33.312914 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:33.312924 zram_generator::config[1053]: No configuration found.
Dec 13 01:29:33.312935 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:29:33.312945 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:29:33.312955 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:29:33.312968 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:29:33.312978 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:29:33.312989 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:29:33.312999 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:29:33.313009 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:29:33.313019 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:29:33.313029 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:29:33.313039 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:29:33.313049 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:29:33.313061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:33.313071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:33.313081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:29:33.313091 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:29:33.313102 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:29:33.313112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:33.313122 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:29:33.313132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:33.313142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:29:33.313155 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:33.313165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:33.313175 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:29:33.313184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:33.313194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:33.313205 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:33.313217 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:33.313227 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:29:33.313237 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:29:33.313247 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:33.313257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:33.313267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:33.313277 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:29:33.313287 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:29:33.313303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:29:33.313316 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:29:33.313326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:33.313336 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:29:33.313345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:29:33.313355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:29:33.313366 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:29:33.313378 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:29:33.313388 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:29:33.313398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:33.313414 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:33.313439 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:29:33.313460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:33.313475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:33.313485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:33.313500 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:29:33.313510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:33.314559 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:29:33.314574 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:29:33.314584 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:29:33.314596 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:29:33.314606 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:29:33.314622 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:33.314632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:33.314655 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:29:33.314674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:29:33.314713 systemd-journald[1129]: Collecting audit messages is disabled.
Dec 13 01:29:33.314736 systemd-journald[1129]: Journal started
Dec 13 01:29:33.314756 systemd-journald[1129]: Runtime Journal (/run/log/journal/10666363ce234f5fba3c8ae18ec04d00) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:33.084739 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:29:33.104951 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:29:33.105393 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:29:33.317534 kernel: loop: module loaded
Dec 13 01:29:33.324770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:33.339120 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:29:33.339174 systemd[1]: Stopped verity-setup.service.
Dec 13 01:29:33.339189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:33.343708 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:33.346910 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:29:33.347465 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:29:33.349147 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:29:33.350575 kernel: ACPI: bus type drm_connector registered
Dec 13 01:29:33.350943 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:29:33.353135 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:29:33.353713 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:29:33.354929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:33.355869 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:29:33.356023 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:29:33.359294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:33.359462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:33.365799 kernel: fuse: init (API version 7.39)
Dec 13 01:29:33.360227 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:33.360567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:33.361946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:33.362142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:33.364031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:33.364343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:33.367329 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:29:33.367494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:29:33.368451 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:29:33.369258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:33.370105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:29:33.370921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:29:33.385480 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:29:33.395104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:29:33.401616 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:29:33.403587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:29:33.403700 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:33.405979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:29:33.412118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:29:33.416625 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:29:33.417215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:33.423997 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:29:33.426432 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:29:33.427320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:33.431108 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:29:33.432717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:33.440746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:33.443854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:29:33.457712 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:29:33.461895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:29:33.463703 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:29:33.466069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:29:33.491071 systemd-journald[1129]: Time spent on flushing to /var/log/journal/10666363ce234f5fba3c8ae18ec04d00 is 77.844ms for 1137 entries.
Dec 13 01:29:33.491071 systemd-journald[1129]: System Journal (/var/log/journal/10666363ce234f5fba3c8ae18ec04d00) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:29:33.595687 systemd-journald[1129]: Received client request to flush runtime journal.
Dec 13 01:29:33.595740 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 01:29:33.595758 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:29:33.595773 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:29:33.506600 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:29:33.507316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:29:33.516101 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:29:33.534674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:33.543045 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:29:33.566908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:33.584405 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:29:33.595582 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:29:33.605646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:33.606616 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:29:33.614150 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:29:33.617873 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:29:33.641703 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:29:33.639983 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Dec 13 01:29:33.640000 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Dec 13 01:29:33.650337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:33.694056 kernel: loop3: detected capacity change from 0 to 8
Dec 13 01:29:33.712568 kernel: loop4: detected capacity change from 0 to 211296
Dec 13 01:29:33.735545 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 01:29:33.756051 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 01:29:33.774548 kernel: loop7: detected capacity change from 0 to 8
Dec 13 01:29:33.777560 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 01:29:33.779620 (sd-merge)[1199]: Merged extensions into '/usr'.
Dec 13 01:29:33.784966 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:29:33.785116 systemd[1]: Reloading...
Dec 13 01:29:33.892556 zram_generator::config[1228]: No configuration found.
Dec 13 01:29:33.992492 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:29:34.004618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:34.044686 systemd[1]: Reloading finished in 258 ms.
Dec 13 01:29:34.075284 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:29:34.077780 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:29:34.089061 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:29:34.091429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:34.093203 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:29:34.098800 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:34.103722 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:29:34.103733 systemd[1]: Reloading...
Dec 13 01:29:34.128265 systemd-udevd[1271]: Using default interface naming scheme 'v255'.
Dec 13 01:29:34.131605 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:29:34.132246 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:29:34.133174 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:29:34.133491 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Dec 13 01:29:34.133685 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Dec 13 01:29:34.137474 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:34.137484 systemd-tmpfiles[1269]: Skipping /boot
Dec 13 01:29:34.153246 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:34.154045 systemd-tmpfiles[1269]: Skipping /boot
Dec 13 01:29:34.180549 zram_generator::config[1297]: No configuration found.
Dec 13 01:29:34.275557 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1331)
Dec 13 01:29:34.296579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:34.300535 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1331)
Dec 13 01:29:34.352543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:29:34.355911 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:29:34.356818 systemd[1]: Reloading finished in 252 ms.
Dec 13 01:29:34.372105 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:34.376685 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:29:34.373953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:34.396690 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:29:34.400667 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:29:34.406593 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:29:34.416677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:34.426543 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:29:34.427675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:34.436695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:29:34.447857 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:29:34.452247 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 01:29:34.454129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.454803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:34.462388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:34.468752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:34.472794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:34.473383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:34.473474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.482697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.482932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:34.483190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:34.483343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.487662 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.487922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:34.495019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:34.495991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:34.496140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:34.500918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:34.501135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:34.502105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:34.503809 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:29:34.505189 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:29:34.514588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1309)
Dec 13 01:29:34.518282 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:29:34.526969 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:29:34.528911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:29:34.542392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:34.543785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:34.563079 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:34.563724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:34.564970 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:34.565814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:34.567394 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:29:34.571340 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:34.572033 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:29:34.597760 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:29:34.599111 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:29:34.617332 augenrules[1415]: No rules
Dec 13 01:29:34.618365 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:29:34.654683 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:29:34.654951 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:29:34.655142 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:29:34.694588 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:29:34.688463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:34.695785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:29:34.701539 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:29:34.717011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:34.746013 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 01:29:34.746106 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 01:29:34.750081 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:29:34.751653 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:29:34.751698 kernel: [drm] features: -context_init
Dec 13 01:29:34.753031 kernel: [drm] number of scanouts: 1
Dec 13 01:29:34.755547 kernel: [drm] number of cap sets: 0
Dec 13 01:29:34.758559 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 01:29:34.764548 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:29:34.768797 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:29:34.773196 systemd-networkd[1376]: lo: Link UP
Dec 13 01:29:34.773206 systemd-networkd[1376]: lo: Gained carrier
Dec 13 01:29:34.775217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:29:34.776544 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:29:34.782233 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:29:34.782483 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:29:34.791361 systemd-networkd[1376]: Enumeration completed
Dec 13 01:29:34.792447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:34.794485 systemd-timesyncd[1394]: No network connectivity, watching for changes.
Dec 13 01:29:34.795564 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:34.795638 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:34.796698 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:34.796778 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:34.797478 systemd-networkd[1376]: eth0: Link UP
Dec 13 01:29:34.797600 systemd-networkd[1376]: eth0: Gained carrier
Dec 13 01:29:34.797719 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:34.800964 systemd-networkd[1376]: eth1: Link UP
Dec 13 01:29:34.801165 systemd-networkd[1376]: eth1: Gained carrier
Dec 13 01:29:34.801363 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:34.805457 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:29:34.808973 systemd-resolved[1381]: Positive Trust Anchors:
Dec 13 01:29:34.809442 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:34.809535 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:34.811602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:34.811849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:34.813960 systemd-resolved[1381]: Using system hostname 'ci-4081-2-1-f-56be046f0f'.
Dec 13 01:29:34.821740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:34.824104 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:34.824332 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:34.824416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:34.825603 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:34.826433 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Dec 13 01:29:34.857743 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:29:34.867798 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:29:34.876186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:34.882565 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:34.912311 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:29:34.912649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:34.912737 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:34.912937 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:29:34.913055 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:29:34.913317 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:29:34.913491 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:29:34.914694 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:29:34.914888 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:29:34.914979 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:34.915055 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:34.919218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:29:34.922600 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:29:34.929715 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:29:34.932110 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:29:34.934957 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:29:34.935866 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:34.938004 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:34.938885 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:34.938917 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:34.943508 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:29:34.948567 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:34.960706 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:29:34.961674 systemd-networkd[1376]: eth0: DHCPv4 address 188.245.236.49/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:29:34.962714 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Dec 13 01:29:34.964468 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:29:34.975654 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:29:34.987672 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:29:34.988674 jq[1455]: false
Dec 13 01:29:34.988147 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:29:34.989385 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:29:34.994623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:29:34.998203 coreos-metadata[1451]: Dec 13 01:29:34.998 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 01:29:35.000569 coreos-metadata[1451]: Dec 13 01:29:35.000 INFO Fetch successful
Dec 13 01:29:35.000569 coreos-metadata[1451]: Dec 13 01:29:35.000 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 01:29:35.000691 coreos-metadata[1451]: Dec 13 01:29:35.000 INFO Fetch successful
Dec 13 01:29:35.007880 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 01:29:35.017758 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:29:35.027697 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:29:35.033001 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:29:35.037768 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:29:35.038420 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:29:35.041779 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:29:35.045629 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:29:35.050941 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:29:35.059006 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:29:35.059222 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:29:35.066886 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:29:35.067148 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:29:35.073596 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:29:35.075585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:29:35.086096 update_engine[1471]: I20241213 01:29:35.086035 1471 main.cc:92] Flatcar Update Engine starting
Dec 13 01:29:35.086440 dbus-daemon[1452]: [system] SELinux support is enabled
Dec 13 01:29:35.090791 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:29:35.101054 extend-filesystems[1456]: Found loop4
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found loop5
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found loop6
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found loop7
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda1
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda2
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda3
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found usr
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda4
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda6
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda7
Dec 13 01:29:35.111210 extend-filesystems[1456]: Found sda9
Dec 13 01:29:35.111210 extend-filesystems[1456]: Checking size of /dev/sda9
Dec 13 01:29:35.157918 update_engine[1471]: I20241213 01:29:35.110401 1471 update_check_scheduler.cc:74] Next update check in 2m52s
Dec 13 01:29:35.105088 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:29:35.158243 extend-filesystems[1456]: Resized partition /dev/sda9
Dec 13 01:29:35.115974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:29:35.158819 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:29:35.116027 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:29:35.119087 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:29:35.179503 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 13 01:29:35.179625 jq[1472]: true
Dec 13 01:29:35.119106 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:29:35.180090 tar[1475]: linux-amd64/helm
Dec 13 01:29:35.119798 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:29:35.130824 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:29:35.153168 systemd-logind[1467]: New seat seat0.
Dec 13 01:29:35.182319 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 01:29:35.182337 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:29:35.182648 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:29:35.210488 jq[1496]: true
Dec 13 01:29:35.285954 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:29:35.288754 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:29:35.305556 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 01:29:35.318065 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1333)
Dec 13 01:29:35.320728 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 01:29:35.320728 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 01:29:35.320728 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 01:29:35.333050 extend-filesystems[1456]: Resized filesystem in /dev/sda9
Dec 13 01:29:35.333050 extend-filesystems[1456]: Found sr0
Dec 13 01:29:35.328967 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:29:35.329169 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:29:35.346560 bash[1529]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:29:35.349665 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:29:35.362639 systemd[1]: Starting sshkeys.service...
Dec 13 01:29:35.376113 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:29:35.392896 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:29:35.472660 coreos-metadata[1535]: Dec 13 01:29:35.472 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 01:29:35.475346 coreos-metadata[1535]: Dec 13 01:29:35.475 INFO Fetch successful
Dec 13 01:29:35.478872 unknown[1535]: wrote ssh authorized keys file for user: core
Dec 13 01:29:35.491922 containerd[1480]: time="2024-12-13T01:29:35.491846658Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:29:35.498667 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:29:35.529182 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:29:35.525222 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:29:35.530696 systemd[1]: Finished sshkeys.service.
Dec 13 01:29:35.561256 containerd[1480]: time="2024-12-13T01:29:35.561051975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.563404 containerd[1480]: time="2024-12-13T01:29:35.563379560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:29:35.563462 containerd[1480]: time="2024-12-13T01:29:35.563448919Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:29:35.563509 containerd[1480]: time="2024-12-13T01:29:35.563497471Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:29:35.563733 containerd[1480]: time="2024-12-13T01:29:35.563716411Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565541643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565628617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565641741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565831507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565854721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565874177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565888665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.565997418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566275 containerd[1480]: time="2024-12-13T01:29:35.566243279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566567 containerd[1480]: time="2024-12-13T01:29:35.566549443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:29:35.566627 containerd[1480]: time="2024-12-13T01:29:35.566614575Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:29:35.566756 containerd[1480]: time="2024-12-13T01:29:35.566740311Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:29:35.568773 containerd[1480]: time="2024-12-13T01:29:35.568585100Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:29:35.572168 containerd[1480]: time="2024-12-13T01:29:35.572122123Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:29:35.572260 containerd[1480]: time="2024-12-13T01:29:35.572244783Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:29:35.572432 containerd[1480]: time="2024-12-13T01:29:35.572415032Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:29:35.572504 containerd[1480]: time="2024-12-13T01:29:35.572491455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:29:35.572581 containerd[1480]: time="2024-12-13T01:29:35.572567938Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:29:35.572759 containerd[1480]: time="2024-12-13T01:29:35.572743568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:29:35.573697 containerd[1480]: time="2024-12-13T01:29:35.573670707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:29:35.573868 containerd[1480]: time="2024-12-13T01:29:35.573852537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:29:35.573941 containerd[1480]: time="2024-12-13T01:29:35.573928169Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:29:35.574040 containerd[1480]: time="2024-12-13T01:29:35.574025622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:29:35.574104 containerd[1480]: time="2024-12-13T01:29:35.574080836Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576548202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576566296Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576578319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576591283Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576625437Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576639724Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576650053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576666684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576679789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576699566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576710987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576721326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576732818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577540 containerd[1480]: time="2024-12-13T01:29:35.576742336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576753978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576764327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576782351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576792791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576804222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576814401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576827025Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576843746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576853505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576862702Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576915782Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576930379Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:29:35.577768 containerd[1480]: time="2024-12-13T01:29:35.576939747Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:29:35.578013 containerd[1480]: time="2024-12-13T01:29:35.576949504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:29:35.578013 containerd[1480]: time="2024-12-13T01:29:35.576957670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.578013 containerd[1480]: time="2024-12-13T01:29:35.576967849Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:29:35.578013 containerd[1480]: time="2024-12-13T01:29:35.576981484Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:29:35.578013 containerd[1480]: time="2024-12-13T01:29:35.576991102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:29:35.578091 containerd[1480]: time="2024-12-13T01:29:35.577214391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:29:35.578091 containerd[1480]: time="2024-12-13T01:29:35.577262472Z" level=info msg="Connect containerd service"
Dec 13 01:29:35.578091 containerd[1480]: time="2024-12-13T01:29:35.577289282Z" level=info msg="using legacy CRI server"
Dec 13 01:29:35.578091 containerd[1480]: time="2024-12-13T01:29:35.577295143Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:29:35.578091 containerd[1480]: time="2024-12-13T01:29:35.577382116Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:29:35.578797 containerd[1480]: time="2024-12-13T01:29:35.578760670Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:29:35.579035 containerd[1480]: time="2024-12-13T01:29:35.578975654Z" level=info msg="Start subscribing containerd event"
Dec 13 01:29:35.579068 containerd[1480]: time="2024-12-13T01:29:35.579056315Z" level=info msg="Start recovering state"
Dec 13 01:29:35.579187 containerd[1480]: time="2024-12-13T01:29:35.579161181Z" level=info msg="Start event monitor"
Dec 13 01:29:35.579187 containerd[1480]: time="2024-12-13T01:29:35.579182652Z" level=info msg="Start snapshots syncer"
Dec 13 01:29:35.579228 containerd[1480]: time="2024-12-13T01:29:35.579192961Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:29:35.579228 containerd[1480]: time="2024-12-13T01:29:35.579200987Z" level=info msg="Start streaming server"
Dec 13 01:29:35.582286 containerd[1480]: time="2024-12-13T01:29:35.581649647Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:29:35.582286 containerd[1480]: time="2024-12-13T01:29:35.581729889Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:29:35.582286 containerd[1480]: time="2024-12-13T01:29:35.581809908Z" level=info msg="containerd successfully booted in 0.094060s"
Dec 13 01:29:35.581898 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:29:35.765794 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:29:35.787106 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:29:35.797008 tar[1475]: linux-amd64/LICENSE
Dec 13 01:29:35.797107 tar[1475]: linux-amd64/README.md
Dec 13 01:29:35.798708 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:29:35.812920 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:29:35.813227 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:29:35.817578 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:29:35.825759 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:29:35.835002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:29:35.837509 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:29:35.841821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:29:35.845353 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:29:36.067736 systemd-networkd[1376]: eth0: Gained IPv6LL
Dec 13 01:29:36.068440 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Dec 13 01:29:36.071420 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:29:36.074582 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:29:36.081793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:36.085765 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:29:36.110255 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:29:36.771669 systemd-networkd[1376]: eth1: Gained IPv6LL
Dec 13 01:29:36.772228 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Dec 13 01:29:36.803468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:36.805108 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:29:36.808746 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:36.810920 systemd[1]: Startup finished in 1.193s (kernel) + 5.838s (initrd) + 4.327s (userspace) = 11.359s.
Dec 13 01:29:37.409250 kubelet[1583]: E1213 01:29:37.409149 1583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:37.413959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:37.414154 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:29:41.586558 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:29:41.592951 systemd[1]: Started sshd@0-188.245.236.49:22-34.141.85.251:34456.service - OpenSSH per-connection server daemon (34.141.85.251:34456).
Dec 13 01:29:41.669075 sshd[1596]: Connection closed by authenticating user root 34.141.85.251 port 34456 [preauth]
Dec 13 01:29:41.672818 systemd[1]: sshd@0-188.245.236.49:22-34.141.85.251:34456.service: Deactivated successfully.
Dec 13 01:29:47.610258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:29:47.615720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:47.750470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:47.755759 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:47.810851 kubelet[1608]: E1213 01:29:47.810773 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:47.818171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:47.818428 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:29:57.860299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:29:57.866716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:57.988317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:57.992813 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:58.041817 kubelet[1625]: E1213 01:29:58.041714 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:58.045327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:58.045507 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:06.891352 systemd-timesyncd[1394]: Contacted time server 129.70.132.37:123 (2.flatcar.pool.ntp.org).
Dec 13 01:30:06.891423 systemd-timesyncd[1394]: Initial clock synchronization to Fri 2024-12-13 01:30:06.808899 UTC.
Dec 13 01:30:08.110160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:30:08.115682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:08.241098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:08.246025 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:08.293851 kubelet[1642]: E1213 01:30:08.293799 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:08.298212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:08.298430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:18.360201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:30:18.369713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:18.495066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:18.508972 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:18.561686 kubelet[1659]: E1213 01:30:18.561611 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:18.566360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:18.566577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:20.497335 update_engine[1471]: I20241213 01:30:20.497231 1471 update_attempter.cc:509] Updating boot flags...
Dec 13 01:30:20.549581 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676)
Dec 13 01:30:20.609475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1680)
Dec 13 01:30:20.653551 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1680)
Dec 13 01:30:28.610138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:30:28.615710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:28.749701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:28.749865 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:28.791000 kubelet[1696]: E1213 01:30:28.790882 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:28.794399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:28.794615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:38.860136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:30:38.865679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:39.000616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:39.004747 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:39.046438 kubelet[1711]: E1213 01:30:39.046372 1711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:39.050877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:39.051061 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:49.110338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 01:30:49.115805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:49.253470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:49.257571 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:49.303880 kubelet[1727]: E1213 01:30:49.303826 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:49.308959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:49.309150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:59.360242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 01:30:59.366699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:59.499604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:59.504423 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:59.546728 kubelet[1744]: E1213 01:30:59.546659 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:59.551654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:59.551866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:09.610154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 01:31:09.614731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:09.745371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:09.749756 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:09.791824 kubelet[1760]: E1213 01:31:09.791769 1760 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:09.796063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:09.796253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:19.860295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 01:31:19.865694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:20.003456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:20.012885 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:20.060895 kubelet[1777]: E1213 01:31:20.060833 1777 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:20.065263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:20.065446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:30.110115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:31:30.117072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:30.249201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:30.254232 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:30.298826 kubelet[1793]: E1213 01:31:30.298759 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:30.302842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:30.303026 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:31:30.425794 systemd[1]: Started sshd@1-188.245.236.49:22-147.75.109.163:41064.service - OpenSSH per-connection server daemon (147.75.109.163:41064). Dec 13 01:31:31.392256 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 41064 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:31.394979 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:31.404322 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:31.419846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:31:31.422018 systemd-logind[1467]: New session 1 of user core. Dec 13 01:31:31.434494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:31:31.443847 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:31.449697 (systemd)[1807]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:31.568937 systemd[1807]: Queued start job for default target default.target. Dec 13 01:31:31.575772 systemd[1807]: Created slice app.slice - User Application Slice. Dec 13 01:31:31.575801 systemd[1807]: Reached target paths.target - Paths. Dec 13 01:31:31.575815 systemd[1807]: Reached target timers.target - Timers. Dec 13 01:31:31.577306 systemd[1807]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:31.598828 systemd[1807]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:31.598999 systemd[1807]: Reached target sockets.target - Sockets. Dec 13 01:31:31.599020 systemd[1807]: Reached target basic.target - Basic System. Dec 13 01:31:31.599068 systemd[1807]: Reached target default.target - Main User Target. Dec 13 01:31:31.599111 systemd[1807]: Startup finished in 141ms. Dec 13 01:31:31.599255 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 13 01:31:31.613762 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:31:32.303801 systemd[1]: Started sshd@2-188.245.236.49:22-147.75.109.163:41066.service - OpenSSH per-connection server daemon (147.75.109.163:41066). Dec 13 01:31:33.278672 sshd[1818]: Accepted publickey for core from 147.75.109.163 port 41066 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:33.280542 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:33.285787 systemd-logind[1467]: New session 2 of user core. Dec 13 01:31:33.299808 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:31:33.961343 sshd[1818]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:33.965627 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:31:33.966656 systemd[1]: sshd@2-188.245.236.49:22-147.75.109.163:41066.service: Deactivated successfully. Dec 13 01:31:33.968885 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:33.969742 systemd-logind[1467]: Removed session 2. Dec 13 01:31:34.127809 systemd[1]: Started sshd@3-188.245.236.49:22-147.75.109.163:41070.service - OpenSSH per-connection server daemon (147.75.109.163:41070). Dec 13 01:31:35.111427 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 41070 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:35.113151 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:35.118843 systemd-logind[1467]: New session 3 of user core. Dec 13 01:31:35.125665 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:35.788022 sshd[1825]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:35.790666 systemd[1]: sshd@3-188.245.236.49:22-147.75.109.163:41070.service: Deactivated successfully. Dec 13 01:31:35.793179 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 01:31:35.794108 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:35.795451 systemd-logind[1467]: Removed session 3. Dec 13 01:31:35.958802 systemd[1]: Started sshd@4-188.245.236.49:22-147.75.109.163:41074.service - OpenSSH per-connection server daemon (147.75.109.163:41074). Dec 13 01:31:36.924282 sshd[1832]: Accepted publickey for core from 147.75.109.163 port 41074 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:36.926116 sshd[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:36.931765 systemd-logind[1467]: New session 4 of user core. Dec 13 01:31:36.945670 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:31:37.599985 sshd[1832]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:37.604430 systemd[1]: sshd@4-188.245.236.49:22-147.75.109.163:41074.service: Deactivated successfully. Dec 13 01:31:37.606605 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:37.607308 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:37.608463 systemd-logind[1467]: Removed session 4. Dec 13 01:31:37.771807 systemd[1]: Started sshd@5-188.245.236.49:22-147.75.109.163:42608.service - OpenSSH per-connection server daemon (147.75.109.163:42608). Dec 13 01:31:38.737389 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 42608 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:38.738899 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:38.743088 systemd-logind[1467]: New session 5 of user core. Dec 13 01:31:38.752638 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:31:39.266900 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:31:39.267338 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:39.284568 sudo[1842]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:39.443447 sshd[1839]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:39.447423 systemd[1]: sshd@5-188.245.236.49:22-147.75.109.163:42608.service: Deactivated successfully. Dec 13 01:31:39.449668 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:31:39.451678 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:31:39.453101 systemd-logind[1467]: Removed session 5. Dec 13 01:31:39.624880 systemd[1]: Started sshd@6-188.245.236.49:22-147.75.109.163:42620.service - OpenSSH per-connection server daemon (147.75.109.163:42620). Dec 13 01:31:40.360281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 01:31:40.366040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:40.509857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:40.524826 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:40.568284 kubelet[1857]: E1213 01:31:40.568186 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:40.572277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:40.572583 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:31:40.610622 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 42620 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:40.612334 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:40.618052 systemd-logind[1467]: New session 6 of user core. Dec 13 01:31:40.623714 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:31:41.139290 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:31:41.139701 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:41.144189 sudo[1867]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:41.150898 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:31:41.151256 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:41.163798 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:41.168207 auditctl[1870]: No rules Dec 13 01:31:41.168767 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:31:41.169043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:41.175990 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:41.202885 augenrules[1888]: No rules Dec 13 01:31:41.204601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:41.206046 sudo[1866]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:41.367336 sshd[1847]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:41.371865 systemd[1]: sshd@6-188.245.236.49:22-147.75.109.163:42620.service: Deactivated successfully. Dec 13 01:31:41.373948 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 13 01:31:41.374721 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:31:41.375846 systemd-logind[1467]: Removed session 6. Dec 13 01:31:41.538027 systemd[1]: Started sshd@7-188.245.236.49:22-147.75.109.163:42634.service - OpenSSH per-connection server daemon (147.75.109.163:42634). Dec 13 01:31:42.539024 sshd[1896]: Accepted publickey for core from 147.75.109.163 port 42634 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:42.540662 sshd[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:42.545454 systemd-logind[1467]: New session 7 of user core. Dec 13 01:31:42.553667 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:31:43.066364 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:43.066716 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:43.317787 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:43.319824 (dockerd)[1915]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:43.563137 dockerd[1915]: time="2024-12-13T01:31:43.563061534Z" level=info msg="Starting up" Dec 13 01:31:43.668238 dockerd[1915]: time="2024-12-13T01:31:43.668191902Z" level=info msg="Loading containers: start." Dec 13 01:31:43.774547 kernel: Initializing XFRM netlink socket Dec 13 01:31:43.849486 systemd-networkd[1376]: docker0: Link UP Dec 13 01:31:43.869029 dockerd[1915]: time="2024-12-13T01:31:43.868977655Z" level=info msg="Loading containers: done." 
Dec 13 01:31:43.885989 dockerd[1915]: time="2024-12-13T01:31:43.885945665Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:43.886187 dockerd[1915]: time="2024-12-13T01:31:43.886037878Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:43.886187 dockerd[1915]: time="2024-12-13T01:31:43.886142716Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:43.919958 dockerd[1915]: time="2024-12-13T01:31:43.919360289Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:43.919851 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:31:44.989986 containerd[1480]: time="2024-12-13T01:31:44.989940896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:31:45.544301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330816042.mount: Deactivated successfully. 
Dec 13 01:31:46.543678 containerd[1480]: time="2024-12-13T01:31:46.543603379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.544553 containerd[1480]: time="2024-12-13T01:31:46.544450126Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139346" Dec 13 01:31:46.545341 containerd[1480]: time="2024-12-13T01:31:46.545304749Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.547428 containerd[1480]: time="2024-12-13T01:31:46.547387134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.548444 containerd[1480]: time="2024-12-13T01:31:46.548309613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.558329745s" Dec 13 01:31:46.548444 containerd[1480]: time="2024-12-13T01:31:46.548337396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:31:46.567740 containerd[1480]: time="2024-12-13T01:31:46.567701122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:31:47.862999 containerd[1480]: time="2024-12-13T01:31:47.862934882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.863946 containerd[1480]: time="2024-12-13T01:31:47.863907716Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217752" Dec 13 01:31:47.864745 containerd[1480]: time="2024-12-13T01:31:47.864709971Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.867072 containerd[1480]: time="2024-12-13T01:31:47.866992450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.868113 containerd[1480]: time="2024-12-13T01:31:47.867948764Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.300214569s" Dec 13 01:31:47.868113 containerd[1480]: time="2024-12-13T01:31:47.867997094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:31:47.887879 containerd[1480]: time="2024-12-13T01:31:47.887833247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:31:48.779923 containerd[1480]: time="2024-12-13T01:31:48.779847712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.780666 containerd[1480]: time="2024-12-13T01:31:48.780638604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332842" Dec 13 01:31:48.781262 containerd[1480]: time="2024-12-13T01:31:48.781224413Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:48.784210 containerd[1480]: time="2024-12-13T01:31:48.784154397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:48.785090 containerd[1480]: time="2024-12-13T01:31:48.784921566Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 897.054405ms" Dec 13 01:31:48.785090 containerd[1480]: time="2024-12-13T01:31:48.784947123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:31:48.805157 containerd[1480]: time="2024-12-13T01:31:48.805117815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:31:49.715421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982004179.mount: Deactivated successfully. 
Dec 13 01:31:49.972913 containerd[1480]: time="2024-12-13T01:31:49.972814564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.973840 containerd[1480]: time="2024-12-13T01:31:49.973802727Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984" Dec 13 01:31:49.974615 containerd[1480]: time="2024-12-13T01:31:49.974584323Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.976268 containerd[1480]: time="2024-12-13T01:31:49.976227774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.976907 containerd[1480]: time="2024-12-13T01:31:49.976693788Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.171543072s" Dec 13 01:31:49.976907 containerd[1480]: time="2024-12-13T01:31:49.976720228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:31:49.997073 containerd[1480]: time="2024-12-13T01:31:49.997025942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:31:50.527306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187756200.mount: Deactivated successfully. 
Dec 13 01:31:50.610047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 13 01:31:50.620629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:50.784773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:50.787911 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:50.833603 kubelet[2169]: E1213 01:31:50.833555 2169 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:50.837856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:50.838028 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:31:51.280539 containerd[1480]: time="2024-12-13T01:31:51.280453356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.281539 containerd[1480]: time="2024-12-13T01:31:51.281369224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 13 01:31:51.282337 containerd[1480]: time="2024-12-13T01:31:51.282284179Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.284556 containerd[1480]: time="2024-12-13T01:31:51.284493993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.285622 containerd[1480]: time="2024-12-13T01:31:51.285493166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.288428862s" Dec 13 01:31:51.285622 containerd[1480]: time="2024-12-13T01:31:51.285539443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:31:51.305132 containerd[1480]: time="2024-12-13T01:31:51.305100602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:31:52.014465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010456224.mount: Deactivated successfully. 
Dec 13 01:31:52.019483 containerd[1480]: time="2024-12-13T01:31:52.019425196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.020478 containerd[1480]: time="2024-12-13T01:31:52.020421654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Dec 13 01:31:52.021275 containerd[1480]: time="2024-12-13T01:31:52.021220081Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.023549 containerd[1480]: time="2024-12-13T01:31:52.023480019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.024772 containerd[1480]: time="2024-12-13T01:31:52.024346804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 719.103423ms" Dec 13 01:31:52.024772 containerd[1480]: time="2024-12-13T01:31:52.024376660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:31:52.047316 containerd[1480]: time="2024-12-13T01:31:52.047274527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:31:52.581266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702727205.mount: Deactivated successfully. 
Dec 13 01:31:54.770339 containerd[1480]: time="2024-12-13T01:31:54.770288832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:54.771363 containerd[1480]: time="2024-12-13T01:31:54.771329072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705" Dec 13 01:31:54.772092 containerd[1480]: time="2024-12-13T01:31:54.772046197Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:54.776783 containerd[1480]: time="2024-12-13T01:31:54.776715132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:54.778503 containerd[1480]: time="2024-12-13T01:31:54.778464461Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.731151393s" Dec 13 01:31:54.778503 containerd[1480]: time="2024-12-13T01:31:54.778500689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:31:57.470691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:57.479922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:57.509747 systemd[1]: Reloading requested from client PID 2333 ('systemctl') (unit session-7.scope)... Dec 13 01:31:57.509776 systemd[1]: Reloading... 
Dec 13 01:31:57.641590 zram_generator::config[2374]: No configuration found. Dec 13 01:31:57.766717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:57.837559 systemd[1]: Reloading finished in 327 ms. Dec 13 01:31:57.882914 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:31:57.883020 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:31:57.883289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:57.888759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:58.053708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:58.053814 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:58.108627 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:58.108627 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:58.108627 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:31:58.109098 kubelet[2425]: I1213 01:31:58.108674 2425 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:58.392497 kubelet[2425]: I1213 01:31:58.392444 2425 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:31:58.392497 kubelet[2425]: I1213 01:31:58.392481 2425 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:58.392713 kubelet[2425]: I1213 01:31:58.392699 2425 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:31:58.413753 kubelet[2425]: E1213 01:31:58.413673 2425 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.236.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.414164 kubelet[2425]: I1213 01:31:58.413919 2425 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:58.430530 kubelet[2425]: I1213 01:31:58.430485 2425 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:31:58.432707 kubelet[2425]: I1213 01:31:58.432676 2425 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:58.433677 kubelet[2425]: I1213 01:31:58.433641 2425 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:31:58.433677 kubelet[2425]: I1213 01:31:58.433673 2425 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:58.433801 kubelet[2425]: I1213 01:31:58.433683 2425 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:31:58.433801 kubelet[2425]: I1213 01:31:58.433790 2425 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:31:58.433914 kubelet[2425]: I1213 01:31:58.433895 2425 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:31:58.433914 kubelet[2425]: I1213 01:31:58.433911 2425 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:58.436001 kubelet[2425]: I1213 01:31:58.433937 2425 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:31:58.436001 kubelet[2425]: I1213 01:31:58.433951 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:58.436001 kubelet[2425]: W1213 01:31:58.435142 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.236.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.436001 kubelet[2425]: E1213 01:31:58.435180 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.236.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.436001 kubelet[2425]: W1213 01:31:58.435363 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.236.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-56be046f0f&limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.436001 kubelet[2425]: E1213 01:31:58.435388 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.236.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-56be046f0f&limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused
Dec 13 01:31:58.436001 kubelet[2425]: I1213 01:31:58.435690 2425 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:58.439152 kubelet[2425]: I1213 01:31:58.439129 2425 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:58.439220 kubelet[2425]: W1213 01:31:58.439190 2425 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:31:58.440098 kubelet[2425]: I1213 01:31:58.439933 2425 server.go:1256] "Started kubelet" Dec 13 01:31:58.440098 kubelet[2425]: I1213 01:31:58.439991 2425 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:58.440664 kubelet[2425]: I1213 01:31:58.440645 2425 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:31:58.444229 kubelet[2425]: I1213 01:31:58.443916 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:58.445782 kubelet[2425]: I1213 01:31:58.445114 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:58.445782 kubelet[2425]: I1213 01:31:58.445271 2425 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:58.450208 kubelet[2425]: E1213 01:31:58.449367 2425 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.236.49:6443/api/v1/namespaces/default/events\": dial tcp 188.245.236.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-f-56be046f0f.1810987cd4209d25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-f-56be046f0f,UID:ci-4081-2-1-f-56be046f0f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-f-56be046f0f,},FirstTimestamp:2024-12-13
01:31:58.439914789 +0000 UTC m=+0.381636598,LastTimestamp:2024-12-13 01:31:58.439914789 +0000 UTC m=+0.381636598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-f-56be046f0f,}" Dec 13 01:31:58.451196 kubelet[2425]: I1213 01:31:58.451182 2425 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:31:58.451830 kubelet[2425]: I1213 01:31:58.451816 2425 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:31:58.451952 kubelet[2425]: I1213 01:31:58.451941 2425 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:31:58.452256 kubelet[2425]: W1213 01:31:58.452228 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.236.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.452326 kubelet[2425]: E1213 01:31:58.452316 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.236.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.453753 kubelet[2425]: E1213 01:31:58.453736 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.236.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-56be046f0f?timeout=10s\": dial tcp 188.245.236.49:6443: connect: connection refused" interval="200ms" Dec 13 01:31:58.453897 kubelet[2425]: E1213 01:31:58.453885 2425 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:58.454131 kubelet[2425]: I1213 01:31:58.454113 2425 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:58.455911 kubelet[2425]: I1213 01:31:58.455886 2425 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:58.455911 kubelet[2425]: I1213 01:31:58.455904 2425 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:58.467312 kubelet[2425]: I1213 01:31:58.467278 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:58.468352 kubelet[2425]: I1213 01:31:58.468332 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:31:58.468439 kubelet[2425]: I1213 01:31:58.468359 2425 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:58.468439 kubelet[2425]: I1213 01:31:58.468374 2425 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:31:58.468439 kubelet[2425]: E1213 01:31:58.468410 2425 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:58.476387 kubelet[2425]: W1213 01:31:58.476350 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.236.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.476695 kubelet[2425]: E1213 01:31:58.476663 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://188.245.236.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:58.484271 kubelet[2425]: I1213 01:31:58.484256 2425 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:58.484567 kubelet[2425]: I1213 01:31:58.484361 2425 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:58.484567 kubelet[2425]: I1213 01:31:58.484382 2425 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:58.486083 kubelet[2425]: I1213 01:31:58.486060 2425 policy_none.go:49] "None policy: Start" Dec 13 01:31:58.486662 kubelet[2425]: I1213 01:31:58.486649 2425 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:58.486980 kubelet[2425]: I1213 01:31:58.486763 2425 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:58.495752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:31:58.515806 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:31:58.518780 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:31:58.531551 kubelet[2425]: I1213 01:31:58.531458 2425 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:58.531793 kubelet[2425]: I1213 01:31:58.531768 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:58.533825 kubelet[2425]: E1213 01:31:58.533801 2425 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:31:58.553784 kubelet[2425]: I1213 01:31:58.553658 2425 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.554750 kubelet[2425]: E1213 01:31:58.553986 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.236.49:6443/api/v1/nodes\": dial tcp 188.245.236.49:6443: connect: connection refused" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.569705 kubelet[2425]: I1213 01:31:58.569380 2425 topology_manager.go:215] "Topology Admit Handler" podUID="6084332dbea7d099edb1ce0012185d5b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.571656 kubelet[2425]: I1213 01:31:58.571629 2425 topology_manager.go:215] "Topology Admit Handler" podUID="963d23554c00f1ff21d7ee6e62d49672" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.573165 kubelet[2425]: I1213 01:31:58.573120 2425 topology_manager.go:215] "Topology Admit Handler" podUID="d8e4930f2d92ecfc4e221868a52a364a" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.582152 systemd[1]: Created slice kubepods-burstable-pod6084332dbea7d099edb1ce0012185d5b.slice - libcontainer container kubepods-burstable-pod6084332dbea7d099edb1ce0012185d5b.slice. 
Dec 13 01:31:58.592085 systemd[1]: Created slice kubepods-burstable-pod963d23554c00f1ff21d7ee6e62d49672.slice - libcontainer container kubepods-burstable-pod963d23554c00f1ff21d7ee6e62d49672.slice. Dec 13 01:31:58.603971 systemd[1]: Created slice kubepods-burstable-podd8e4930f2d92ecfc4e221868a52a364a.slice - libcontainer container kubepods-burstable-podd8e4930f2d92ecfc4e221868a52a364a.slice. Dec 13 01:31:58.653264 kubelet[2425]: I1213 01:31:58.653013 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.653264 kubelet[2425]: I1213 01:31:58.653079 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.653264 kubelet[2425]: I1213 01:31:58.653144 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.655301 kubelet[2425]: E1213 01:31:58.655280 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.236.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-56be046f0f?timeout=10s\": dial tcp 188.245.236.49:6443: connect: 
connection refused" interval="400ms" Dec 13 01:31:58.753744 kubelet[2425]: I1213 01:31:58.753666 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.753744 kubelet[2425]: I1213 01:31:58.753738 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.753969 kubelet[2425]: I1213 01:31:58.753780 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.753969 kubelet[2425]: I1213 01:31:58.753873 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.753969 kubelet[2425]: I1213 01:31:58.753912 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d8e4930f2d92ecfc4e221868a52a364a-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-f-56be046f0f\" (UID: \"d8e4930f2d92ecfc4e221868a52a364a\") " pod="kube-system/kube-scheduler-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.753969 kubelet[2425]: I1213 01:31:58.753948 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.756114 kubelet[2425]: I1213 01:31:58.756062 2425 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.756483 kubelet[2425]: E1213 01:31:58.756459 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.236.49:6443/api/v1/nodes\": dial tcp 188.245.236.49:6443: connect: connection refused" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:58.893125 containerd[1480]: time="2024-12-13T01:31:58.893063480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-f-56be046f0f,Uid:6084332dbea7d099edb1ce0012185d5b,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:58.905800 containerd[1480]: time="2024-12-13T01:31:58.905674169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-f-56be046f0f,Uid:963d23554c00f1ff21d7ee6e62d49672,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:58.906660 containerd[1480]: time="2024-12-13T01:31:58.906501961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-f-56be046f0f,Uid:d8e4930f2d92ecfc4e221868a52a364a,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:59.056621 kubelet[2425]: E1213 01:31:59.056576 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.236.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-56be046f0f?timeout=10s\": dial tcp 188.245.236.49:6443: connect: connection refused" interval="800ms" Dec 13 01:31:59.159140 kubelet[2425]: I1213 01:31:59.159021 2425 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:59.159595 kubelet[2425]: E1213 01:31:59.159347 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.236.49:6443/api/v1/nodes\": dial tcp 188.245.236.49:6443: connect: connection refused" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:59.379347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970978897.mount: Deactivated successfully. Dec 13 01:31:59.386415 containerd[1480]: time="2024-12-13T01:31:59.386347021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:59.389244 containerd[1480]: time="2024-12-13T01:31:59.387895615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:59.392786 containerd[1480]: time="2024-12-13T01:31:59.392736995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:59.394262 containerd[1480]: time="2024-12-13T01:31:59.394212802Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:59.395507 containerd[1480]: time="2024-12-13T01:31:59.395287307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 01:31:59.395868 
containerd[1480]: time="2024-12-13T01:31:59.395816940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:59.396506 containerd[1480]: time="2024-12-13T01:31:59.396465726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:59.398826 containerd[1480]: time="2024-12-13T01:31:59.398782101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:59.400747 containerd[1480]: time="2024-12-13T01:31:59.400337357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.58936ms" Dec 13 01:31:59.403249 containerd[1480]: time="2024-12-13T01:31:59.403131116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.5408ms" Dec 13 01:31:59.403602 containerd[1480]: time="2024-12-13T01:31:59.403397275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.238025ms" Dec 13 
01:31:59.432708 kubelet[2425]: W1213 01:31:59.431062 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.236.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.432708 kubelet[2425]: E1213 01:31:59.431124 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.236.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.528776 containerd[1480]: time="2024-12-13T01:31:59.528660518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:59.528999 containerd[1480]: time="2024-12-13T01:31:59.528793236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:59.528999 containerd[1480]: time="2024-12-13T01:31:59.528853700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.529494 containerd[1480]: time="2024-12-13T01:31:59.529449016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.530592 containerd[1480]: time="2024-12-13T01:31:59.530219863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:59.530592 containerd[1480]: time="2024-12-13T01:31:59.530397737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:59.531111 containerd[1480]: time="2024-12-13T01:31:59.531054809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.531860 containerd[1480]: time="2024-12-13T01:31:59.531799354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.540558 containerd[1480]: time="2024-12-13T01:31:59.537966038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:59.540558 containerd[1480]: time="2024-12-13T01:31:59.538015983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:59.540558 containerd[1480]: time="2024-12-13T01:31:59.538032443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.540558 containerd[1480]: time="2024-12-13T01:31:59.538110540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:59.565802 systemd[1]: Started cri-containerd-c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496.scope - libcontainer container c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496. Dec 13 01:31:59.570789 systemd[1]: Started cri-containerd-7de23ec0fe18fa37180c92b57e25dd9471747b1d7a1475869e0d7b3e4d04eeeb.scope - libcontainer container 7de23ec0fe18fa37180c92b57e25dd9471747b1d7a1475869e0d7b3e4d04eeeb. Dec 13 01:31:59.573317 systemd[1]: Started cri-containerd-ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843.scope - libcontainer container ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843. 
Dec 13 01:31:59.628547 containerd[1480]: time="2024-12-13T01:31:59.628416248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-f-56be046f0f,Uid:963d23554c00f1ff21d7ee6e62d49672,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496\"" Dec 13 01:31:59.635485 containerd[1480]: time="2024-12-13T01:31:59.635342917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-f-56be046f0f,Uid:6084332dbea7d099edb1ce0012185d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7de23ec0fe18fa37180c92b57e25dd9471747b1d7a1475869e0d7b3e4d04eeeb\"" Dec 13 01:31:59.638798 containerd[1480]: time="2024-12-13T01:31:59.638459552Z" level=info msg="CreateContainer within sandbox \"c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:31:59.640185 containerd[1480]: time="2024-12-13T01:31:59.639977719Z" level=info msg="CreateContainer within sandbox \"7de23ec0fe18fa37180c92b57e25dd9471747b1d7a1475869e0d7b3e4d04eeeb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:31:59.657741 containerd[1480]: time="2024-12-13T01:31:59.657691797Z" level=info msg="CreateContainer within sandbox \"c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311\"" Dec 13 01:31:59.659546 containerd[1480]: time="2024-12-13T01:31:59.658828830Z" level=info msg="StartContainer for \"d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311\"" Dec 13 01:31:59.660672 containerd[1480]: time="2024-12-13T01:31:59.660651247Z" level=info msg="CreateContainer within sandbox \"7de23ec0fe18fa37180c92b57e25dd9471747b1d7a1475869e0d7b3e4d04eeeb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"b2f029d7ab83c28bb57936fcdfdcfde637319749aef5e385a39516816d656986\"" Dec 13 01:31:59.661228 containerd[1480]: time="2024-12-13T01:31:59.661209945Z" level=info msg="StartContainer for \"b2f029d7ab83c28bb57936fcdfdcfde637319749aef5e385a39516816d656986\"" Dec 13 01:31:59.664080 containerd[1480]: time="2024-12-13T01:31:59.664049731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-f-56be046f0f,Uid:d8e4930f2d92ecfc4e221868a52a364a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843\"" Dec 13 01:31:59.667006 containerd[1480]: time="2024-12-13T01:31:59.666962663Z" level=info msg="CreateContainer within sandbox \"ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:31:59.680308 containerd[1480]: time="2024-12-13T01:31:59.680273485Z" level=info msg="CreateContainer within sandbox \"ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422\"" Dec 13 01:31:59.681997 containerd[1480]: time="2024-12-13T01:31:59.681952473Z" level=info msg="StartContainer for \"89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422\"" Dec 13 01:31:59.698215 systemd[1]: Started cri-containerd-b2f029d7ab83c28bb57936fcdfdcfde637319749aef5e385a39516816d656986.scope - libcontainer container b2f029d7ab83c28bb57936fcdfdcfde637319749aef5e385a39516816d656986. Dec 13 01:31:59.706637 systemd[1]: Started cri-containerd-d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311.scope - libcontainer container d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311. 
Dec 13 01:31:59.728826 systemd[1]: Started cri-containerd-89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422.scope - libcontainer container 89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422. Dec 13 01:31:59.759354 containerd[1480]: time="2024-12-13T01:31:59.759300442Z" level=info msg="StartContainer for \"b2f029d7ab83c28bb57936fcdfdcfde637319749aef5e385a39516816d656986\" returns successfully" Dec 13 01:31:59.775572 containerd[1480]: time="2024-12-13T01:31:59.775482518Z" level=info msg="StartContainer for \"d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311\" returns successfully" Dec 13 01:31:59.801417 containerd[1480]: time="2024-12-13T01:31:59.801373941Z" level=info msg="StartContainer for \"89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422\" returns successfully" Dec 13 01:31:59.832373 kubelet[2425]: W1213 01:31:59.832289 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.236.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-56be046f0f&limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.832373 kubelet[2425]: E1213 01:31:59.832349 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.236.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-56be046f0f&limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.857164 kubelet[2425]: E1213 01:31:59.857128 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.236.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-56be046f0f?timeout=10s\": dial tcp 188.245.236.49:6443: connect: connection refused" interval="1.6s" Dec 13 01:31:59.904788 kubelet[2425]: W1213 01:31:59.904726 2425 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.236.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.904788 kubelet[2425]: E1213 01:31:59.904764 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.236.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.916452 kubelet[2425]: W1213 01:31:59.916360 2425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.236.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.916452 kubelet[2425]: E1213 01:31:59.916411 2425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.236.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.236.49:6443: connect: connection refused Dec 13 01:31:59.963148 kubelet[2425]: I1213 01:31:59.962314 2425 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:31:59.964015 kubelet[2425]: E1213 01:31:59.963975 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.236.49:6443/api/v1/nodes\": dial tcp 188.245.236.49:6443: connect: connection refused" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:01.463774 kubelet[2425]: E1213 01:32:01.463717 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-f-56be046f0f\" not found" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:01.567358 kubelet[2425]: I1213 01:32:01.567321 2425 kubelet_node_status.go:73] "Attempting to 
register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:01.577599 kubelet[2425]: I1213 01:32:01.577546 2425 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:01.587633 kubelet[2425]: E1213 01:32:01.587571 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:01.687816 kubelet[2425]: E1213 01:32:01.687732 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:01.788589 kubelet[2425]: E1213 01:32:01.788437 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:01.889098 kubelet[2425]: E1213 01:32:01.889020 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:01.990103 kubelet[2425]: E1213 01:32:01.990045 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.090957 kubelet[2425]: E1213 01:32:02.090797 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.191778 kubelet[2425]: E1213 01:32:02.191711 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.292622 kubelet[2425]: E1213 01:32:02.292574 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.393653 kubelet[2425]: E1213 01:32:02.393607 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.494357 kubelet[2425]: E1213 01:32:02.494306 2425 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.595436 kubelet[2425]: E1213 01:32:02.595388 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:02.696164 kubelet[2425]: E1213 01:32:02.696017 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-56be046f0f\" not found" Dec 13 01:32:03.437591 kubelet[2425]: I1213 01:32:03.437508 2425 apiserver.go:52] "Watching apiserver" Dec 13 01:32:03.452281 kubelet[2425]: I1213 01:32:03.452221 2425 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:04.165605 systemd[1]: Reloading requested from client PID 2704 ('systemctl') (unit session-7.scope)... Dec 13 01:32:04.165625 systemd[1]: Reloading... Dec 13 01:32:04.271615 zram_generator::config[2750]: No configuration found. Dec 13 01:32:04.365110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:04.441151 systemd[1]: Reloading finished in 275 ms. Dec 13 01:32:04.486621 kubelet[2425]: I1213 01:32:04.486577 2425 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:04.487090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:04.498954 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:04.499210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:04.505859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:04.623688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:32:04.624727 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:04.707747 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:04.707747 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:04.707747 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:04.707747 kubelet[2795]: I1213 01:32:04.707030 2795 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:04.718549 kubelet[2795]: I1213 01:32:04.718022 2795 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:04.718549 kubelet[2795]: I1213 01:32:04.718056 2795 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:04.718549 kubelet[2795]: I1213 01:32:04.718287 2795 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:04.719737 kubelet[2795]: I1213 01:32:04.719693 2795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:04.726160 kubelet[2795]: I1213 01:32:04.726135 2795 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:04.732403 kubelet[2795]: I1213 01:32:04.732263 2795 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:04.732583 kubelet[2795]: I1213 01:32:04.732497 2795 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:04.732672 kubelet[2795]: I1213 01:32:04.732655 2795 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:04.732838 kubelet[2795]: I1213 01:32:04.732681 2795 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:04.732838 kubelet[2795]: I1213 01:32:04.732690 2795 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:04.732838 kubelet[2795]: I1213 
01:32:04.732726 2795 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:04.732838 kubelet[2795]: I1213 01:32:04.732828 2795 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:32:04.732838 kubelet[2795]: I1213 01:32:04.732842 2795 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:04.736599 kubelet[2795]: I1213 01:32:04.736578 2795 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:04.740225 kubelet[2795]: I1213 01:32:04.739575 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:04.741337 kubelet[2795]: I1213 01:32:04.740331 2795 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:04.741337 kubelet[2795]: I1213 01:32:04.740504 2795 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:04.741589 kubelet[2795]: I1213 01:32:04.741574 2795 server.go:1256] "Started kubelet" Dec 13 01:32:04.748663 kubelet[2795]: I1213 01:32:04.748238 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:04.749776 kubelet[2795]: I1213 01:32:04.749688 2795 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:04.751985 kubelet[2795]: I1213 01:32:04.751842 2795 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:04.755352 kubelet[2795]: I1213 01:32:04.755334 2795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:04.755677 kubelet[2795]: I1213 01:32:04.755661 2795 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:04.759256 sudo[2809]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:32:04.760241 sudo[2809]: pam_unix(sudo:session): session opened for user root(uid=0) by 
core(uid=0) Dec 13 01:32:04.766142 kubelet[2795]: I1213 01:32:04.765039 2795 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:04.766611 kubelet[2795]: I1213 01:32:04.766574 2795 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:32:04.766821 kubelet[2795]: I1213 01:32:04.766809 2795 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:04.768733 kubelet[2795]: I1213 01:32:04.768166 2795 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:04.776749 kubelet[2795]: I1213 01:32:04.776674 2795 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:04.776749 kubelet[2795]: I1213 01:32:04.776694 2795 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:04.781790 kubelet[2795]: I1213 01:32:04.781769 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:04.782878 kubelet[2795]: I1213 01:32:04.782865 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:04.782954 kubelet[2795]: I1213 01:32:04.782945 2795 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:04.783218 kubelet[2795]: I1213 01:32:04.783015 2795 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:04.783218 kubelet[2795]: E1213 01:32:04.783060 2795 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:04.792894 kubelet[2795]: E1213 01:32:04.792860 2795 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:04.833952 kubelet[2795]: I1213 01:32:04.833927 2795 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:04.834394 kubelet[2795]: I1213 01:32:04.834174 2795 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:04.834394 kubelet[2795]: I1213 01:32:04.834194 2795 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:04.834394 kubelet[2795]: I1213 01:32:04.834339 2795 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:04.834394 kubelet[2795]: I1213 01:32:04.834358 2795 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:04.834394 kubelet[2795]: I1213 01:32:04.834365 2795 policy_none.go:49] "None policy: Start" Dec 13 01:32:04.835333 kubelet[2795]: I1213 01:32:04.835297 2795 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:04.835577 kubelet[2795]: I1213 01:32:04.835403 2795 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:04.835768 kubelet[2795]: I1213 01:32:04.835754 2795 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:04.842257 kubelet[2795]: I1213 01:32:04.841701 2795 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:04.842257 kubelet[2795]: I1213 01:32:04.841935 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:04.876424 kubelet[2795]: I1213 01:32:04.876400 2795 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:04.883421 kubelet[2795]: I1213 01:32:04.883366 2795 topology_manager.go:215] "Topology Admit Handler" podUID="d8e4930f2d92ecfc4e221868a52a364a" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:04.883593 kubelet[2795]: I1213 01:32:04.883438 2795 topology_manager.go:215] "Topology Admit Handler" 
podUID="6084332dbea7d099edb1ce0012185d5b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:04.883593 kubelet[2795]: I1213 01:32:04.883468 2795 topology_manager.go:215] "Topology Admit Handler" podUID="963d23554c00f1ff21d7ee6e62d49672" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:04.890160 kubelet[2795]: I1213 01:32:04.889851 2795 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:04.890160 kubelet[2795]: I1213 01:32:04.889920 2795 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068750 kubelet[2795]: I1213 01:32:05.068297 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068750 kubelet[2795]: I1213 01:32:05.068635 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068750 kubelet[2795]: I1213 01:32:05.068684 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 
01:32:05.068750 kubelet[2795]: I1213 01:32:05.068706 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8e4930f2d92ecfc4e221868a52a364a-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-f-56be046f0f\" (UID: \"d8e4930f2d92ecfc4e221868a52a364a\") " pod="kube-system/kube-scheduler-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068750 kubelet[2795]: I1213 01:32:05.068727 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068960 kubelet[2795]: I1213 01:32:05.068744 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6084332dbea7d099edb1ce0012185d5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-f-56be046f0f\" (UID: \"6084332dbea7d099edb1ce0012185d5b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068960 kubelet[2795]: I1213 01:32:05.068781 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068960 kubelet[2795]: I1213 01:32:05.068798 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.068960 kubelet[2795]: I1213 01:32:05.068817 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/963d23554c00f1ff21d7ee6e62d49672-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-f-56be046f0f\" (UID: \"963d23554c00f1ff21d7ee6e62d49672\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" Dec 13 01:32:05.388919 sudo[2809]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:05.740777 kubelet[2795]: I1213 01:32:05.740638 2795 apiserver.go:52] "Watching apiserver" Dec 13 01:32:05.767060 kubelet[2795]: I1213 01:32:05.767022 2795 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:05.841064 kubelet[2795]: I1213 01:32:05.839897 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-f-56be046f0f" podStartSLOduration=1.839851873 podStartE2EDuration="1.839851873s" podCreationTimestamp="2024-12-13 01:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:05.831197924 +0000 UTC m=+1.200880001" watchObservedRunningTime="2024-12-13 01:32:05.839851873 +0000 UTC m=+1.209533950" Dec 13 01:32:05.849315 kubelet[2795]: I1213 01:32:05.849154 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-f-56be046f0f" podStartSLOduration=1.8491218680000001 podStartE2EDuration="1.849121868s" podCreationTimestamp="2024-12-13 01:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 
01:32:05.848665592 +0000 UTC m=+1.218347670" watchObservedRunningTime="2024-12-13 01:32:05.849121868 +0000 UTC m=+1.218803946" Dec 13 01:32:05.849315 kubelet[2795]: I1213 01:32:05.849262 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-f-56be046f0f" podStartSLOduration=1.849247744 podStartE2EDuration="1.849247744s" podCreationTimestamp="2024-12-13 01:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:05.840469731 +0000 UTC m=+1.210151809" watchObservedRunningTime="2024-12-13 01:32:05.849247744 +0000 UTC m=+1.218929822" Dec 13 01:32:06.875103 sudo[1899]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:07.037787 sshd[1896]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:07.042365 systemd[1]: sshd@7-188.245.236.49:22-147.75.109.163:42634.service: Deactivated successfully. Dec 13 01:32:07.045188 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:32:07.045432 systemd[1]: session-7.scope: Consumed 4.496s CPU time, 187.3M memory peak, 0B memory swap peak. Dec 13 01:32:07.047824 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:32:07.049345 systemd-logind[1467]: Removed session 7. Dec 13 01:32:18.129998 kubelet[2795]: I1213 01:32:18.129972 2795 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:18.130871 containerd[1480]: time="2024-12-13T01:32:18.130768776Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:32:18.131204 kubelet[2795]: I1213 01:32:18.131087 2795 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:18.666635 kubelet[2795]: I1213 01:32:18.666478 2795 topology_manager.go:215] "Topology Admit Handler" podUID="b055d900-8c98-448b-93ff-7ab3647df3eb" podNamespace="kube-system" podName="kube-proxy-wsfxj" Dec 13 01:32:18.681352 systemd[1]: Created slice kubepods-besteffort-podb055d900_8c98_448b_93ff_7ab3647df3eb.slice - libcontainer container kubepods-besteffort-podb055d900_8c98_448b_93ff_7ab3647df3eb.slice. Dec 13 01:32:18.688704 kubelet[2795]: I1213 01:32:18.687800 2795 topology_manager.go:215] "Topology Admit Handler" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" podNamespace="kube-system" podName="cilium-z9qz2" Dec 13 01:32:18.697841 systemd[1]: Created slice kubepods-burstable-pod41c57c0e_0002_4599_8f32_dbe0cd6709c1.slice - libcontainer container kubepods-burstable-pod41c57c0e_0002_4599_8f32_dbe0cd6709c1.slice. Dec 13 01:32:18.760931 kubelet[2795]: I1213 01:32:18.760889 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-lib-modules\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.760931 kubelet[2795]: I1213 01:32:18.760932 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b055d900-8c98-448b-93ff-7ab3647df3eb-kube-proxy\") pod \"kube-proxy-wsfxj\" (UID: \"b055d900-8c98-448b-93ff-7ab3647df3eb\") " pod="kube-system/kube-proxy-wsfxj" Dec 13 01:32:18.761111 kubelet[2795]: I1213 01:32:18.760955 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-bpf-maps\") 
pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761111 kubelet[2795]: I1213 01:32:18.760972 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-xtables-lock\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761111 kubelet[2795]: I1213 01:32:18.760990 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41c57c0e-0002-4599-8f32-dbe0cd6709c1-clustermesh-secrets\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761111 kubelet[2795]: I1213 01:32:18.761006 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-net\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761111 kubelet[2795]: I1213 01:32:18.761022 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b055d900-8c98-448b-93ff-7ab3647df3eb-lib-modules\") pod \"kube-proxy-wsfxj\" (UID: \"b055d900-8c98-448b-93ff-7ab3647df3eb\") " pod="kube-system/kube-proxy-wsfxj" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761039 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnm4\" (UniqueName: \"kubernetes.io/projected/b055d900-8c98-448b-93ff-7ab3647df3eb-kube-api-access-gwnm4\") pod \"kube-proxy-wsfxj\" (UID: \"b055d900-8c98-448b-93ff-7ab3647df3eb\") " 
pod="kube-system/kube-proxy-wsfxj" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761055 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-cgroup\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761072 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-kernel\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761093 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-etc-cni-netd\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761108 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hubble-tls\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761324 kubelet[2795]: I1213 01:32:18.761126 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hostproc\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761565 kubelet[2795]: I1213 01:32:18.761141 2795 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b055d900-8c98-448b-93ff-7ab3647df3eb-xtables-lock\") pod \"kube-proxy-wsfxj\" (UID: \"b055d900-8c98-448b-93ff-7ab3647df3eb\") " pod="kube-system/kube-proxy-wsfxj" Dec 13 01:32:18.761565 kubelet[2795]: I1213 01:32:18.761157 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cni-path\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761565 kubelet[2795]: I1213 01:32:18.761173 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-config-path\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761565 kubelet[2795]: I1213 01:32:18.761189 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-run\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.761565 kubelet[2795]: I1213 01:32:18.761206 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rj5\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5\") pod \"cilium-z9qz2\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " pod="kube-system/cilium-z9qz2" Dec 13 01:32:18.890473 kubelet[2795]: E1213 01:32:18.890207 2795 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:32:18.890473 
kubelet[2795]: E1213 01:32:18.890243 2795 projected.go:200] Error preparing data for projected volume kube-api-access-gwnm4 for pod kube-system/kube-proxy-wsfxj: configmap "kube-root-ca.crt" not found
Dec 13 01:32:18.890473 kubelet[2795]: E1213 01:32:18.890303 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b055d900-8c98-448b-93ff-7ab3647df3eb-kube-api-access-gwnm4 podName:b055d900-8c98-448b-93ff-7ab3647df3eb nodeName:}" failed. No retries permitted until 2024-12-13 01:32:19.39028572 +0000 UTC m=+14.759967798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwnm4" (UniqueName: "kubernetes.io/projected/b055d900-8c98-448b-93ff-7ab3647df3eb-kube-api-access-gwnm4") pod "kube-proxy-wsfxj" (UID: "b055d900-8c98-448b-93ff-7ab3647df3eb") : configmap "kube-root-ca.crt" not found
Dec 13 01:32:18.890931 kubelet[2795]: E1213 01:32:18.890916 2795 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:32:18.891002 kubelet[2795]: E1213 01:32:18.890992 2795 projected.go:200] Error preparing data for projected volume kube-api-access-59rj5 for pod kube-system/cilium-z9qz2: configmap "kube-root-ca.crt" not found
Dec 13 01:32:18.891083 kubelet[2795]: E1213 01:32:18.891073 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5 podName:41c57c0e-0002-4599-8f32-dbe0cd6709c1 nodeName:}" failed. No retries permitted until 2024-12-13 01:32:19.391061746 +0000 UTC m=+14.760743834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-59rj5" (UniqueName: "kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5") pod "cilium-z9qz2" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1") : configmap "kube-root-ca.crt" not found
Dec 13 01:32:19.212160 kubelet[2795]: I1213 01:32:19.212117 2795 topology_manager.go:215] "Topology Admit Handler" podUID="7a8e2421-4c78-4311-88d0-2a5024143529" podNamespace="kube-system" podName="cilium-operator-5cc964979-gbj2g"
Dec 13 01:32:19.226007 systemd[1]: Created slice kubepods-besteffort-pod7a8e2421_4c78_4311_88d0_2a5024143529.slice - libcontainer container kubepods-besteffort-pod7a8e2421_4c78_4311_88d0_2a5024143529.slice.
Dec 13 01:32:19.264505 kubelet[2795]: I1213 01:32:19.264450 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a8e2421-4c78-4311-88d0-2a5024143529-cilium-config-path\") pod \"cilium-operator-5cc964979-gbj2g\" (UID: \"7a8e2421-4c78-4311-88d0-2a5024143529\") " pod="kube-system/cilium-operator-5cc964979-gbj2g"
Dec 13 01:32:19.264652 kubelet[2795]: I1213 01:32:19.264536 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ltf\" (UniqueName: \"kubernetes.io/projected/7a8e2421-4c78-4311-88d0-2a5024143529-kube-api-access-f6ltf\") pod \"cilium-operator-5cc964979-gbj2g\" (UID: \"7a8e2421-4c78-4311-88d0-2a5024143529\") " pod="kube-system/cilium-operator-5cc964979-gbj2g"
Dec 13 01:32:19.530985 containerd[1480]: time="2024-12-13T01:32:19.530629977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gbj2g,Uid:7a8e2421-4c78-4311-88d0-2a5024143529,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:19.555691 containerd[1480]: time="2024-12-13T01:32:19.555506324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:19.555691 containerd[1480]: time="2024-12-13T01:32:19.555631779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:19.555691 containerd[1480]: time="2024-12-13T01:32:19.555647439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.555989 containerd[1480]: time="2024-12-13T01:32:19.555752906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.574740 systemd[1]: Started cri-containerd-65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc.scope - libcontainer container 65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc.
Dec 13 01:32:19.591368 containerd[1480]: time="2024-12-13T01:32:19.591295357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsfxj,Uid:b055d900-8c98-448b-93ff-7ab3647df3eb,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:19.602996 containerd[1480]: time="2024-12-13T01:32:19.602949505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9qz2,Uid:41c57c0e-0002-4599-8f32-dbe0cd6709c1,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:19.621949 containerd[1480]: time="2024-12-13T01:32:19.621612518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:19.622551 containerd[1480]: time="2024-12-13T01:32:19.622466601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:19.622730 containerd[1480]: time="2024-12-13T01:32:19.622537403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.624194 containerd[1480]: time="2024-12-13T01:32:19.624114370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.630119 containerd[1480]: time="2024-12-13T01:32:19.630029234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gbj2g,Uid:7a8e2421-4c78-4311-88d0-2a5024143529,Namespace:kube-system,Attempt:0,} returns sandbox id \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\""
Dec 13 01:32:19.632911 containerd[1480]: time="2024-12-13T01:32:19.632782497Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:32:19.649725 containerd[1480]: time="2024-12-13T01:32:19.648952655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:19.650275 containerd[1480]: time="2024-12-13T01:32:19.649765791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:19.650275 containerd[1480]: time="2024-12-13T01:32:19.649788233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.650578 containerd[1480]: time="2024-12-13T01:32:19.650394901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:19.654922 systemd[1]: Started cri-containerd-be59810fa02f6f2dbfc91569c992bca26cf604251173f6f9155f40ea04c07d1a.scope - libcontainer container be59810fa02f6f2dbfc91569c992bca26cf604251173f6f9155f40ea04c07d1a.
Dec 13 01:32:19.680930 systemd[1]: Started cri-containerd-22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42.scope - libcontainer container 22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42.
Dec 13 01:32:19.692476 containerd[1480]: time="2024-12-13T01:32:19.692431241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsfxj,Uid:b055d900-8c98-448b-93ff-7ab3647df3eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"be59810fa02f6f2dbfc91569c992bca26cf604251173f6f9155f40ea04c07d1a\""
Dec 13 01:32:19.696678 containerd[1480]: time="2024-12-13T01:32:19.696643482Z" level=info msg="CreateContainer within sandbox \"be59810fa02f6f2dbfc91569c992bca26cf604251173f6f9155f40ea04c07d1a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:32:19.718754 containerd[1480]: time="2024-12-13T01:32:19.718701061Z" level=info msg="CreateContainer within sandbox \"be59810fa02f6f2dbfc91569c992bca26cf604251173f6f9155f40ea04c07d1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe9823d3224e8971f8000e80320b9d9a9a1ab48a76707ed363e202217ada638d\""
Dec 13 01:32:19.719300 containerd[1480]: time="2024-12-13T01:32:19.719275829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9qz2,Uid:41c57c0e-0002-4599-8f32-dbe0cd6709c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\""
Dec 13 01:32:19.719698 containerd[1480]: time="2024-12-13T01:32:19.719681891Z" level=info msg="StartContainer for \"fe9823d3224e8971f8000e80320b9d9a9a1ab48a76707ed363e202217ada638d\""
Dec 13 01:32:19.750748 systemd[1]: Started cri-containerd-fe9823d3224e8971f8000e80320b9d9a9a1ab48a76707ed363e202217ada638d.scope - libcontainer container fe9823d3224e8971f8000e80320b9d9a9a1ab48a76707ed363e202217ada638d.
Dec 13 01:32:19.781948 containerd[1480]: time="2024-12-13T01:32:19.781369288Z" level=info msg="StartContainer for \"fe9823d3224e8971f8000e80320b9d9a9a1ab48a76707ed363e202217ada638d\" returns successfully"
Dec 13 01:32:19.848668 kubelet[2795]: I1213 01:32:19.848090 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wsfxj" podStartSLOduration=1.848057434 podStartE2EDuration="1.848057434s" podCreationTimestamp="2024-12-13 01:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:19.848016486 +0000 UTC m=+15.217698564" watchObservedRunningTime="2024-12-13 01:32:19.848057434 +0000 UTC m=+15.217739512"
Dec 13 01:32:21.782284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505049911.mount: Deactivated successfully.
Dec 13 01:32:22.237292 containerd[1480]: time="2024-12-13T01:32:22.237243749Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.238164 containerd[1480]: time="2024-12-13T01:32:22.238021557Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907213"
Dec 13 01:32:22.238826 containerd[1480]: time="2024-12-13T01:32:22.238664543Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.240237 containerd[1480]: time="2024-12-13T01:32:22.239810703Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.606986607s"
Dec 13 01:32:22.240237 containerd[1480]: time="2024-12-13T01:32:22.239848564Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:32:22.242412 containerd[1480]: time="2024-12-13T01:32:22.242183142Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:32:22.244101 containerd[1480]: time="2024-12-13T01:32:22.243907958Z" level=info msg="CreateContainer within sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:32:22.257907 containerd[1480]: time="2024-12-13T01:32:22.257375957Z" level=info msg="CreateContainer within sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\""
Dec 13 01:32:22.257416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747169132.mount: Deactivated successfully.
Dec 13 01:32:22.258924 containerd[1480]: time="2024-12-13T01:32:22.258283990Z" level=info msg="StartContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\""
Dec 13 01:32:22.287983 systemd[1]: Started cri-containerd-9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874.scope - libcontainer container 9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874.
Dec 13 01:32:22.311347 containerd[1480]: time="2024-12-13T01:32:22.311301929Z" level=info msg="StartContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" returns successfully"
Dec 13 01:32:22.896323 kubelet[2795]: I1213 01:32:22.896279 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gbj2g" podStartSLOduration=1.286348142 podStartE2EDuration="3.896246028s" podCreationTimestamp="2024-12-13 01:32:19 +0000 UTC" firstStartedPulling="2024-12-13 01:32:19.632083167 +0000 UTC m=+15.001765245" lastFinishedPulling="2024-12-13 01:32:22.241981053 +0000 UTC m=+17.611663131" observedRunningTime="2024-12-13 01:32:22.895363803 +0000 UTC m=+18.265045881" watchObservedRunningTime="2024-12-13 01:32:22.896246028 +0000 UTC m=+18.265928106"
Dec 13 01:32:26.324683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855766185.mount: Deactivated successfully.
Dec 13 01:32:27.479017 update_engine[1471]: I20241213 01:32:27.478813 1471 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 01:32:27.479017 update_engine[1471]: I20241213 01:32:27.478873 1471 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 01:32:27.482368 update_engine[1471]: I20241213 01:32:27.482342 1471 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 01:32:27.482863 update_engine[1471]: I20241213 01:32:27.482789 1471 omaha_request_params.cc:62] Current group set to stable
Dec 13 01:32:27.482904 update_engine[1471]: I20241213 01:32:27.482890 1471 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 01:32:27.482904 update_engine[1471]: I20241213 01:32:27.482899 1471 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 01:32:27.483419 update_engine[1471]: I20241213 01:32:27.482916 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:32:27.483419 update_engine[1471]: I20241213 01:32:27.482942 1471 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 01:32:27.483419 update_engine[1471]: I20241213 01:32:27.482991 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:32:27.483419 update_engine[1471]: I20241213 01:32:27.483000 1471 omaha_request_action.cc:272] Request:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]:
Dec 13 01:32:27.483419 update_engine[1471]: I20241213 01:32:27.483006 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:32:27.490699 update_engine[1471]: I20241213 01:32:27.489583 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:32:27.490699 update_engine[1471]: I20241213 01:32:27.489835 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:32:27.491320 update_engine[1471]: E20241213 01:32:27.491242 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:32:27.491320 update_engine[1471]: I20241213 01:32:27.491298 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 01:32:27.492255 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 01:32:27.843317 containerd[1480]: time="2024-12-13T01:32:27.842840781Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:27.844412 containerd[1480]: time="2024-12-13T01:32:27.844365952Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735379"
Dec 13 01:32:27.844817 containerd[1480]: time="2024-12-13T01:32:27.844760040Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:27.845933 containerd[1480]: time="2024-12-13T01:32:27.845836339Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.603610688s"
Dec 13 01:32:27.845933 containerd[1480]: time="2024-12-13T01:32:27.845862088Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 01:32:27.847881 containerd[1480]: time="2024-12-13T01:32:27.847854876Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:32:27.904488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866510261.mount: Deactivated successfully.
Dec 13 01:32:27.914497 containerd[1480]: time="2024-12-13T01:32:27.914459299Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\""
Dec 13 01:32:27.915184 containerd[1480]: time="2024-12-13T01:32:27.914968193Z" level=info msg="StartContainer for \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\""
Dec 13 01:32:28.060331 systemd[1]: run-containerd-runc-k8s.io-406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0-runc.3HH7s0.mount: Deactivated successfully.
Dec 13 01:32:28.070664 systemd[1]: Started cri-containerd-406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0.scope - libcontainer container 406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0.
Dec 13 01:32:28.108493 containerd[1480]: time="2024-12-13T01:32:28.108342436Z" level=info msg="StartContainer for \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\" returns successfully"
Dec 13 01:32:28.110210 systemd[1]: cri-containerd-406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0.scope: Deactivated successfully.
Dec 13 01:32:28.199759 containerd[1480]: time="2024-12-13T01:32:28.188632319Z" level=info msg="shim disconnected" id=406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0 namespace=k8s.io
Dec 13 01:32:28.199759 containerd[1480]: time="2024-12-13T01:32:28.199748288Z" level=warning msg="cleaning up after shim disconnected" id=406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0 namespace=k8s.io
Dec 13 01:32:28.199759 containerd[1480]: time="2024-12-13T01:32:28.199759289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:28.865650 containerd[1480]: time="2024-12-13T01:32:28.865471774Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:32:28.876061 containerd[1480]: time="2024-12-13T01:32:28.876013085Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\""
Dec 13 01:32:28.879543 containerd[1480]: time="2024-12-13T01:32:28.879498112Z" level=info msg="StartContainer for \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\""
Dec 13 01:32:28.903114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0-rootfs.mount: Deactivated successfully.
Dec 13 01:32:28.929648 systemd[1]: Started cri-containerd-6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4.scope - libcontainer container 6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4.
Dec 13 01:32:28.952354 containerd[1480]: time="2024-12-13T01:32:28.952321681Z" level=info msg="StartContainer for \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\" returns successfully"
Dec 13 01:32:28.966683 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:32:28.967024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:28.967094 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:28.971974 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:28.973021 systemd[1]: cri-containerd-6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4.scope: Deactivated successfully.
Dec 13 01:32:28.994776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4-rootfs.mount: Deactivated successfully.
Dec 13 01:32:29.001184 containerd[1480]: time="2024-12-13T01:32:29.000977249Z" level=info msg="shim disconnected" id=6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4 namespace=k8s.io
Dec 13 01:32:29.001184 containerd[1480]: time="2024-12-13T01:32:29.001026021Z" level=warning msg="cleaning up after shim disconnected" id=6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4 namespace=k8s.io
Dec 13 01:32:29.001184 containerd[1480]: time="2024-12-13T01:32:29.001034487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:29.010538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:29.015078 containerd[1480]: time="2024-12-13T01:32:29.015037581Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:32:29.867124 containerd[1480]: time="2024-12-13T01:32:29.866912759Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:32:29.889402 containerd[1480]: time="2024-12-13T01:32:29.889291523Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\""
Dec 13 01:32:29.890811 containerd[1480]: time="2024-12-13T01:32:29.890772010Z" level=info msg="StartContainer for \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\""
Dec 13 01:32:29.927724 systemd[1]: Started cri-containerd-9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425.scope - libcontainer container 9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425.
Dec 13 01:32:29.959163 containerd[1480]: time="2024-12-13T01:32:29.958990932Z" level=info msg="StartContainer for \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\" returns successfully"
Dec 13 01:32:29.967346 systemd[1]: cri-containerd-9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425.scope: Deactivated successfully.
Dec 13 01:32:29.985697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425-rootfs.mount: Deactivated successfully.
Dec 13 01:32:29.991045 containerd[1480]: time="2024-12-13T01:32:29.990990521Z" level=info msg="shim disconnected" id=9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425 namespace=k8s.io
Dec 13 01:32:29.991045 containerd[1480]: time="2024-12-13T01:32:29.991040645Z" level=warning msg="cleaning up after shim disconnected" id=9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425 namespace=k8s.io
Dec 13 01:32:29.991045 containerd[1480]: time="2024-12-13T01:32:29.991049172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:30.874380 containerd[1480]: time="2024-12-13T01:32:30.874332045Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:32:30.889980 containerd[1480]: time="2024-12-13T01:32:30.888767510Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\""
Dec 13 01:32:30.891775 containerd[1480]: time="2024-12-13T01:32:30.891744294Z" level=info msg="StartContainer for \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\""
Dec 13 01:32:30.930882 systemd[1]: Started cri-containerd-7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8.scope - libcontainer container 7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8.
Dec 13 01:32:30.964105 systemd[1]: cri-containerd-7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8.scope: Deactivated successfully.
Dec 13 01:32:30.966396 containerd[1480]: time="2024-12-13T01:32:30.966228427Z" level=info msg="StartContainer for \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\" returns successfully"
Dec 13 01:32:30.989060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8-rootfs.mount: Deactivated successfully.
Dec 13 01:32:30.995002 containerd[1480]: time="2024-12-13T01:32:30.994932165Z" level=info msg="shim disconnected" id=7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8 namespace=k8s.io
Dec 13 01:32:30.995002 containerd[1480]: time="2024-12-13T01:32:30.994995845Z" level=warning msg="cleaning up after shim disconnected" id=7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8 namespace=k8s.io
Dec 13 01:32:30.996741 containerd[1480]: time="2024-12-13T01:32:30.995007556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:31.878594 containerd[1480]: time="2024-12-13T01:32:31.878545554Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:32:31.899081 containerd[1480]: time="2024-12-13T01:32:31.899035947Z" level=info msg="CreateContainer within sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\""
Dec 13 01:32:31.903271 containerd[1480]: time="2024-12-13T01:32:31.903120668Z" level=info msg="StartContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\""
Dec 13 01:32:31.937714 systemd[1]: Started cri-containerd-2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e.scope - libcontainer container 2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e.
Dec 13 01:32:31.976266 containerd[1480]: time="2024-12-13T01:32:31.976219954Z" level=info msg="StartContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" returns successfully"
Dec 13 01:32:32.160488 kubelet[2795]: I1213 01:32:32.160177 2795 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:32:32.186930 kubelet[2795]: I1213 01:32:32.186854 2795 topology_manager.go:215] "Topology Admit Handler" podUID="27322cdd-c802-4340-a0ef-9dd92ed31f79" podNamespace="kube-system" podName="coredns-76f75df574-jqpk5"
Dec 13 01:32:32.190537 kubelet[2795]: I1213 01:32:32.190038 2795 topology_manager.go:215] "Topology Admit Handler" podUID="4eedc239-8173-41d0-ba74-895176d221ec" podNamespace="kube-system" podName="coredns-76f75df574-zq9vj"
Dec 13 01:32:32.202344 systemd[1]: Created slice kubepods-burstable-pod27322cdd_c802_4340_a0ef_9dd92ed31f79.slice - libcontainer container kubepods-burstable-pod27322cdd_c802_4340_a0ef_9dd92ed31f79.slice.
Dec 13 01:32:32.212709 systemd[1]: Created slice kubepods-burstable-pod4eedc239_8173_41d0_ba74_895176d221ec.slice - libcontainer container kubepods-burstable-pod4eedc239_8173_41d0_ba74_895176d221ec.slice.
Dec 13 01:32:32.349984 kubelet[2795]: I1213 01:32:32.349878 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eedc239-8173-41d0-ba74-895176d221ec-config-volume\") pod \"coredns-76f75df574-zq9vj\" (UID: \"4eedc239-8173-41d0-ba74-895176d221ec\") " pod="kube-system/coredns-76f75df574-zq9vj"
Dec 13 01:32:32.350345 kubelet[2795]: I1213 01:32:32.350314 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb8fw\" (UniqueName: \"kubernetes.io/projected/4eedc239-8173-41d0-ba74-895176d221ec-kube-api-access-kb8fw\") pod \"coredns-76f75df574-zq9vj\" (UID: \"4eedc239-8173-41d0-ba74-895176d221ec\") " pod="kube-system/coredns-76f75df574-zq9vj"
Dec 13 01:32:32.350598 kubelet[2795]: I1213 01:32:32.350559 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5bq\" (UniqueName: \"kubernetes.io/projected/27322cdd-c802-4340-a0ef-9dd92ed31f79-kube-api-access-fs5bq\") pod \"coredns-76f75df574-jqpk5\" (UID: \"27322cdd-c802-4340-a0ef-9dd92ed31f79\") " pod="kube-system/coredns-76f75df574-jqpk5"
Dec 13 01:32:32.350835 kubelet[2795]: I1213 01:32:32.350815 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27322cdd-c802-4340-a0ef-9dd92ed31f79-config-volume\") pod \"coredns-76f75df574-jqpk5\" (UID: \"27322cdd-c802-4340-a0ef-9dd92ed31f79\") " pod="kube-system/coredns-76f75df574-jqpk5"
Dec 13 01:32:32.512153 containerd[1480]: time="2024-12-13T01:32:32.511691933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqpk5,Uid:27322cdd-c802-4340-a0ef-9dd92ed31f79,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:32.517668 containerd[1480]: time="2024-12-13T01:32:32.517536225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zq9vj,Uid:4eedc239-8173-41d0-ba74-895176d221ec,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:34.186658 systemd-networkd[1376]: cilium_host: Link UP
Dec 13 01:32:34.187787 systemd-networkd[1376]: cilium_net: Link UP
Dec 13 01:32:34.189927 systemd-networkd[1376]: cilium_net: Gained carrier
Dec 13 01:32:34.190192 systemd-networkd[1376]: cilium_host: Gained carrier
Dec 13 01:32:34.299167 systemd-networkd[1376]: cilium_vxlan: Link UP
Dec 13 01:32:34.299178 systemd-networkd[1376]: cilium_vxlan: Gained carrier
Dec 13 01:32:34.420291 systemd-networkd[1376]: cilium_net: Gained IPv6LL
Dec 13 01:32:34.658715 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:32:34.683826 systemd-networkd[1376]: cilium_host: Gained IPv6LL
Dec 13 01:32:35.301678 systemd-networkd[1376]: lxc_health: Link UP
Dec 13 01:32:35.306709 systemd-networkd[1376]: lxc_health: Gained carrier
Dec 13 01:32:35.396361 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL
Dec 13 01:32:35.586495 systemd-networkd[1376]: lxc4edee596571a: Link UP
Dec 13 01:32:35.593689 kernel: eth0: renamed from tmp080a1
Dec 13 01:32:35.602614 systemd-networkd[1376]: lxcfb90f91c82e7: Link UP
Dec 13 01:32:35.612915 kernel: eth0: renamed from tmpcabca
Dec 13 01:32:35.618167 systemd-networkd[1376]: lxc4edee596571a: Gained carrier
Dec 13 01:32:35.626745 systemd-networkd[1376]: lxcfb90f91c82e7: Gained carrier
Dec 13 01:32:35.662417 kubelet[2795]: I1213 01:32:35.662361 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z9qz2" podStartSLOduration=9.538984574 podStartE2EDuration="17.662315888s" podCreationTimestamp="2024-12-13 01:32:18 +0000 UTC" firstStartedPulling="2024-12-13 01:32:19.722749183 +0000 UTC m=+15.092431262" lastFinishedPulling="2024-12-13 01:32:27.846080498 +0000 UTC m=+23.215762576" observedRunningTime="2024-12-13 01:32:32.896392992 +0000 UTC m=+28.266075080" watchObservedRunningTime="2024-12-13 01:32:35.662315888 +0000 UTC m=+31.031997976"
Dec 13 01:32:36.803785 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Dec 13 01:32:36.868304 systemd-networkd[1376]: lxcfb90f91c82e7: Gained IPv6LL
Dec 13 01:32:37.123941 systemd-networkd[1376]: lxc4edee596571a: Gained IPv6LL
Dec 13 01:32:37.458419 update_engine[1471]: I20241213 01:32:37.457582 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:32:37.458419 update_engine[1471]: I20241213 01:32:37.457834 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:32:37.462483 update_engine[1471]: I20241213 01:32:37.462339 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:32:37.462862 update_engine[1471]: E20241213 01:32:37.462795 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:32:37.462862 update_engine[1471]: I20241213 01:32:37.462841 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 01:32:37.505701 kubelet[2795]: I1213 01:32:37.505501 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:32:38.811791 containerd[1480]: time="2024-12-13T01:32:38.811510797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:38.812145 containerd[1480]: time="2024-12-13T01:32:38.811982522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:38.812145 containerd[1480]: time="2024-12-13T01:32:38.812022698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.815048 containerd[1480]: time="2024-12-13T01:32:38.813793979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.825546 containerd[1480]: time="2024-12-13T01:32:38.824415933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:38.825546 containerd[1480]: time="2024-12-13T01:32:38.824468832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:38.825546 containerd[1480]: time="2024-12-13T01:32:38.824482248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.825546 containerd[1480]: time="2024-12-13T01:32:38.824562148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.863798 systemd[1]: Started cri-containerd-cabca4226a2869420ace6ce85e5f114e3f47ff8955a657fde85531ef7d7f3676.scope - libcontainer container cabca4226a2869420ace6ce85e5f114e3f47ff8955a657fde85531ef7d7f3676.
Dec 13 01:32:38.876049 systemd[1]: Started cri-containerd-080a1ea86ab262ccbadc752472eecb10a50ace874d01c738557558fff4bab1bb.scope - libcontainer container 080a1ea86ab262ccbadc752472eecb10a50ace874d01c738557558fff4bab1bb.
Dec 13 01:32:38.935579 containerd[1480]: time="2024-12-13T01:32:38.934328783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zq9vj,Uid:4eedc239-8173-41d0-ba74-895176d221ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"080a1ea86ab262ccbadc752472eecb10a50ace874d01c738557558fff4bab1bb\""
Dec 13 01:32:38.941079 containerd[1480]: time="2024-12-13T01:32:38.940668153Z" level=info msg="CreateContainer within sandbox \"080a1ea86ab262ccbadc752472eecb10a50ace874d01c738557558fff4bab1bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:32:38.963601 containerd[1480]: time="2024-12-13T01:32:38.962491207Z" level=info msg="CreateContainer within sandbox \"080a1ea86ab262ccbadc752472eecb10a50ace874d01c738557558fff4bab1bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21f011c3f6bd4a99533e65670e50c47ea98bf9d4c16d29fdff30f6d6b90dcf6a\""
Dec 13 01:32:38.966390 containerd[1480]: time="2024-12-13T01:32:38.964679050Z" level=info msg="StartContainer for \"21f011c3f6bd4a99533e65670e50c47ea98bf9d4c16d29fdff30f6d6b90dcf6a\""
Dec 13 01:32:38.993890 containerd[1480]: time="2024-12-13T01:32:38.993821090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqpk5,Uid:27322cdd-c802-4340-a0ef-9dd92ed31f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabca4226a2869420ace6ce85e5f114e3f47ff8955a657fde85531ef7d7f3676\""
Dec 13 01:32:38.999071 containerd[1480]: time="2024-12-13T01:32:38.998889998Z" level=info msg="CreateContainer within sandbox \"cabca4226a2869420ace6ce85e5f114e3f47ff8955a657fde85531ef7d7f3676\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:32:39.012320 containerd[1480]: time="2024-12-13T01:32:39.012275374Z" level=info msg="CreateContainer within sandbox \"cabca4226a2869420ace6ce85e5f114e3f47ff8955a657fde85531ef7d7f3676\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id
\"66693ec8675044bf61e6eb55e34196f17608d2a81786d473fca30e51077ecf17\"" Dec 13 01:32:39.012666 systemd[1]: Started cri-containerd-21f011c3f6bd4a99533e65670e50c47ea98bf9d4c16d29fdff30f6d6b90dcf6a.scope - libcontainer container 21f011c3f6bd4a99533e65670e50c47ea98bf9d4c16d29fdff30f6d6b90dcf6a. Dec 13 01:32:39.014657 containerd[1480]: time="2024-12-13T01:32:39.014556683Z" level=info msg="StartContainer for \"66693ec8675044bf61e6eb55e34196f17608d2a81786d473fca30e51077ecf17\"" Dec 13 01:32:39.052154 systemd[1]: Started cri-containerd-66693ec8675044bf61e6eb55e34196f17608d2a81786d473fca30e51077ecf17.scope - libcontainer container 66693ec8675044bf61e6eb55e34196f17608d2a81786d473fca30e51077ecf17. Dec 13 01:32:39.059357 containerd[1480]: time="2024-12-13T01:32:39.058236903Z" level=info msg="StartContainer for \"21f011c3f6bd4a99533e65670e50c47ea98bf9d4c16d29fdff30f6d6b90dcf6a\" returns successfully" Dec 13 01:32:39.097204 containerd[1480]: time="2024-12-13T01:32:39.097094397Z" level=info msg="StartContainer for \"66693ec8675044bf61e6eb55e34196f17608d2a81786d473fca30e51077ecf17\" returns successfully" Dec 13 01:32:39.826623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927223223.mount: Deactivated successfully. 
Dec 13 01:32:39.908287 kubelet[2795]: I1213 01:32:39.907261 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jqpk5" podStartSLOduration=20.907216219 podStartE2EDuration="20.907216219s" podCreationTimestamp="2024-12-13 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:39.906637474 +0000 UTC m=+35.276319572" watchObservedRunningTime="2024-12-13 01:32:39.907216219 +0000 UTC m=+35.276898297" Dec 13 01:32:39.937076 kubelet[2795]: I1213 01:32:39.936698 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zq9vj" podStartSLOduration=20.936649086 podStartE2EDuration="20.936649086s" podCreationTimestamp="2024-12-13 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:39.934615461 +0000 UTC m=+35.304297539" watchObservedRunningTime="2024-12-13 01:32:39.936649086 +0000 UTC m=+35.306331174" Dec 13 01:32:47.464552 update_engine[1471]: I20241213 01:32:47.464393 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:47.466366 update_engine[1471]: I20241213 01:32:47.466327 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:47.466638 update_engine[1471]: I20241213 01:32:47.466591 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:32:47.467218 update_engine[1471]: E20241213 01:32:47.467180 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:47.467265 update_engine[1471]: I20241213 01:32:47.467239 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 01:32:57.455765 update_engine[1471]: I20241213 01:32:57.455675 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:57.456229 update_engine[1471]: I20241213 01:32:57.455950 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:57.456229 update_engine[1471]: I20241213 01:32:57.456192 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:32:57.456955 update_engine[1471]: E20241213 01:32:57.456914 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:57.457029 update_engine[1471]: I20241213 01:32:57.456972 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:32:57.457029 update_engine[1471]: I20241213 01:32:57.456984 1471 omaha_request_action.cc:617] Omaha request response: Dec 13 01:32:57.457085 update_engine[1471]: E20241213 01:32:57.457067 1471 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 01:32:57.457117 update_engine[1471]: I20241213 01:32:57.457099 1471 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 01:32:57.457117 update_engine[1471]: I20241213 01:32:57.457108 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:57.457165 update_engine[1471]: I20241213 01:32:57.457114 1471 update_attempter.cc:306] Processing Done. Dec 13 01:32:57.457165 update_engine[1471]: E20241213 01:32:57.457131 1471 update_attempter.cc:619] Update failed. 
Dec 13 01:32:57.457165 update_engine[1471]: I20241213 01:32:57.457139 1471 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 01:32:57.457165 update_engine[1471]: I20241213 01:32:57.457149 1471 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 01:32:57.457165 update_engine[1471]: I20241213 01:32:57.457157 1471 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 01:32:57.457278 update_engine[1471]: I20241213 01:32:57.457250 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:32:57.457623 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 01:32:57.459754 update_engine[1471]: I20241213 01:32:57.459668 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:32:57.459754 update_engine[1471]: I20241213 01:32:57.459692 1471 omaha_request_action.cc:272] Request: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: Dec 13 01:32:57.459754 update_engine[1471]: I20241213 01:32:57.459702 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:57.459974 update_engine[1471]: I20241213 01:32:57.459853 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:57.460071 update_engine[1471]: I20241213 01:32:57.460017 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:32:57.460720 update_engine[1471]: E20241213 01:32:57.460680 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460731 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460742 1471 omaha_request_action.cc:617] Omaha request response: Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460750 1471 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460757 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460764 1471 update_attempter.cc:306] Processing Done. Dec 13 01:32:57.460779 update_engine[1471]: I20241213 01:32:57.460772 1471 update_attempter.cc:310] Error event sent. Dec 13 01:32:57.460916 update_engine[1471]: I20241213 01:32:57.460781 1471 update_check_scheduler.cc:74] Next update check in 44m20s Dec 13 01:32:57.461075 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 01:36:53.472873 systemd[1]: Started sshd@8-188.245.236.49:22-147.75.109.163:55824.service - OpenSSH per-connection server daemon (147.75.109.163:55824). Dec 13 01:36:54.458080 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 55824 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:54.460394 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:54.465896 systemd-logind[1467]: New session 8 of user core. Dec 13 01:36:54.471723 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:36:55.533960 sshd[4196]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:55.538508 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:36:55.539470 systemd[1]: sshd@8-188.245.236.49:22-147.75.109.163:55824.service: Deactivated successfully. Dec 13 01:36:55.542623 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:36:55.543777 systemd-logind[1467]: Removed session 8. Dec 13 01:37:00.701306 systemd[1]: Started sshd@9-188.245.236.49:22-147.75.109.163:54166.service - OpenSSH per-connection server daemon (147.75.109.163:54166). Dec 13 01:37:01.672866 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 54166 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:01.675250 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:01.680036 systemd-logind[1467]: New session 9 of user core. Dec 13 01:37:01.688636 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:37:02.429813 sshd[4210]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:02.433087 systemd[1]: sshd@9-188.245.236.49:22-147.75.109.163:54166.service: Deactivated successfully. Dec 13 01:37:02.435484 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:37:02.437355 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:37:02.438702 systemd-logind[1467]: Removed session 9. Dec 13 01:37:07.609925 systemd[1]: Started sshd@10-188.245.236.49:22-147.75.109.163:41126.service - OpenSSH per-connection server daemon (147.75.109.163:41126). Dec 13 01:37:08.591269 sshd[4227]: Accepted publickey for core from 147.75.109.163 port 41126 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:08.593101 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:08.598496 systemd-logind[1467]: New session 10 of user core. 
Dec 13 01:37:08.603704 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:37:09.330087 sshd[4227]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:09.348800 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:37:09.349808 systemd[1]: sshd@10-188.245.236.49:22-147.75.109.163:41126.service: Deactivated successfully. Dec 13 01:37:09.352024 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:37:09.353146 systemd-logind[1467]: Removed session 10. Dec 13 01:37:09.498069 systemd[1]: Started sshd@11-188.245.236.49:22-147.75.109.163:41138.service - OpenSSH per-connection server daemon (147.75.109.163:41138). Dec 13 01:37:10.485712 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 41138 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:10.487689 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:10.492499 systemd-logind[1467]: New session 11 of user core. Dec 13 01:37:10.501705 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:37:11.314201 sshd[4242]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:11.317825 systemd[1]: sshd@11-188.245.236.49:22-147.75.109.163:41138.service: Deactivated successfully. Dec 13 01:37:11.320771 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:37:11.323404 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:37:11.325254 systemd-logind[1467]: Removed session 11. Dec 13 01:37:11.495807 systemd[1]: Started sshd@12-188.245.236.49:22-147.75.109.163:41154.service - OpenSSH per-connection server daemon (147.75.109.163:41154). 
Dec 13 01:37:12.475805 sshd[4253]: Accepted publickey for core from 147.75.109.163 port 41154 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:12.477630 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:12.482629 systemd-logind[1467]: New session 12 of user core. Dec 13 01:37:12.496729 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:37:13.225225 sshd[4253]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:13.229056 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:37:13.229888 systemd[1]: sshd@12-188.245.236.49:22-147.75.109.163:41154.service: Deactivated successfully. Dec 13 01:37:13.232248 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:37:13.234022 systemd-logind[1467]: Removed session 12. Dec 13 01:37:18.394767 systemd[1]: Started sshd@13-188.245.236.49:22-147.75.109.163:43570.service - OpenSSH per-connection server daemon (147.75.109.163:43570). Dec 13 01:37:19.373206 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 43570 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:19.375114 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:19.379766 systemd-logind[1467]: New session 13 of user core. Dec 13 01:37:19.389698 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:37:20.136492 sshd[4266]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:20.142018 systemd[1]: sshd@13-188.245.236.49:22-147.75.109.163:43570.service: Deactivated successfully. Dec 13 01:37:20.144262 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:37:20.145801 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:37:20.146771 systemd-logind[1467]: Removed session 13. 
Dec 13 01:37:20.308832 systemd[1]: Started sshd@14-188.245.236.49:22-147.75.109.163:43574.service - OpenSSH per-connection server daemon (147.75.109.163:43574). Dec 13 01:37:21.293338 sshd[4281]: Accepted publickey for core from 147.75.109.163 port 43574 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:21.295272 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:21.301701 systemd-logind[1467]: New session 14 of user core. Dec 13 01:37:21.310175 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:37:22.250314 sshd[4281]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:22.254317 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:37:22.254962 systemd[1]: sshd@14-188.245.236.49:22-147.75.109.163:43574.service: Deactivated successfully. Dec 13 01:37:22.257130 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:37:22.258421 systemd-logind[1467]: Removed session 14. Dec 13 01:37:22.421644 systemd[1]: Started sshd@15-188.245.236.49:22-147.75.109.163:43580.service - OpenSSH per-connection server daemon (147.75.109.163:43580). Dec 13 01:37:23.411596 sshd[4294]: Accepted publickey for core from 147.75.109.163 port 43580 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:23.413612 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:23.419320 systemd-logind[1467]: New session 15 of user core. Dec 13 01:37:23.424724 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:37:25.608227 sshd[4294]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:25.616461 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:37:25.617078 systemd[1]: sshd@15-188.245.236.49:22-147.75.109.163:43580.service: Deactivated successfully. 
Dec 13 01:37:25.619760 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:37:25.620878 systemd-logind[1467]: Removed session 15. Dec 13 01:37:25.781898 systemd[1]: Started sshd@16-188.245.236.49:22-147.75.109.163:43584.service - OpenSSH per-connection server daemon (147.75.109.163:43584). Dec 13 01:37:26.770648 sshd[4312]: Accepted publickey for core from 147.75.109.163 port 43584 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:26.772413 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:26.777327 systemd-logind[1467]: New session 16 of user core. Dec 13 01:37:26.782660 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:37:27.688776 sshd[4312]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:27.696775 systemd[1]: sshd@16-188.245.236.49:22-147.75.109.163:43584.service: Deactivated successfully. Dec 13 01:37:27.697353 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:37:27.703056 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:37:27.706480 systemd-logind[1467]: Removed session 16. Dec 13 01:37:27.857646 systemd[1]: Started sshd@17-188.245.236.49:22-147.75.109.163:53632.service - OpenSSH per-connection server daemon (147.75.109.163:53632). Dec 13 01:37:28.842838 sshd[4323]: Accepted publickey for core from 147.75.109.163 port 53632 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:28.844815 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:28.850248 systemd-logind[1467]: New session 17 of user core. Dec 13 01:37:28.855708 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:37:29.579247 sshd[4323]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:29.582861 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. 
Dec 13 01:37:29.583591 systemd[1]: sshd@17-188.245.236.49:22-147.75.109.163:53632.service: Deactivated successfully. Dec 13 01:37:29.585358 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:37:29.586257 systemd-logind[1467]: Removed session 17. Dec 13 01:37:34.750673 systemd[1]: Started sshd@18-188.245.236.49:22-147.75.109.163:53640.service - OpenSSH per-connection server daemon (147.75.109.163:53640). Dec 13 01:37:35.740288 sshd[4339]: Accepted publickey for core from 147.75.109.163 port 53640 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:35.742030 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:35.746555 systemd-logind[1467]: New session 18 of user core. Dec 13 01:37:35.751684 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:37:36.480705 sshd[4339]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:36.484926 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:37:36.485929 systemd[1]: sshd@18-188.245.236.49:22-147.75.109.163:53640.service: Deactivated successfully. Dec 13 01:37:36.488459 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:37:36.489819 systemd-logind[1467]: Removed session 18. Dec 13 01:37:41.650820 systemd[1]: Started sshd@19-188.245.236.49:22-147.75.109.163:52580.service - OpenSSH per-connection server daemon (147.75.109.163:52580). Dec 13 01:37:42.612099 sshd[4352]: Accepted publickey for core from 147.75.109.163 port 52580 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:42.613848 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:42.618283 systemd-logind[1467]: New session 19 of user core. Dec 13 01:37:42.623658 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 01:37:43.339778 sshd[4352]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:43.343326 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:37:43.344449 systemd[1]: sshd@19-188.245.236.49:22-147.75.109.163:52580.service: Deactivated successfully. Dec 13 01:37:43.347757 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:37:43.349251 systemd-logind[1467]: Removed session 19. Dec 13 01:37:43.515801 systemd[1]: Started sshd@20-188.245.236.49:22-147.75.109.163:52588.service - OpenSSH per-connection server daemon (147.75.109.163:52588). Dec 13 01:37:44.484674 sshd[4367]: Accepted publickey for core from 147.75.109.163 port 52588 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:44.486333 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:44.490769 systemd-logind[1467]: New session 20 of user core. Dec 13 01:37:44.497680 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:37:46.317422 containerd[1480]: time="2024-12-13T01:37:46.317366321Z" level=info msg="StopContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" with timeout 30 (s)" Dec 13 01:37:46.318618 containerd[1480]: time="2024-12-13T01:37:46.318187280Z" level=info msg="Stop container \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" with signal terminated" Dec 13 01:37:46.370444 systemd[1]: cri-containerd-9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874.scope: Deactivated successfully. 
Dec 13 01:37:46.410603 containerd[1480]: time="2024-12-13T01:37:46.408987275Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:37:46.411507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874-rootfs.mount: Deactivated successfully. Dec 13 01:37:46.416358 containerd[1480]: time="2024-12-13T01:37:46.416317124Z" level=info msg="StopContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" with timeout 2 (s)" Dec 13 01:37:46.416874 containerd[1480]: time="2024-12-13T01:37:46.416841909Z" level=info msg="Stop container \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" with signal terminated" Dec 13 01:37:46.419290 containerd[1480]: time="2024-12-13T01:37:46.419083243Z" level=info msg="shim disconnected" id=9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874 namespace=k8s.io Dec 13 01:37:46.419290 containerd[1480]: time="2024-12-13T01:37:46.419127866Z" level=warning msg="cleaning up after shim disconnected" id=9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874 namespace=k8s.io Dec 13 01:37:46.419290 containerd[1480]: time="2024-12-13T01:37:46.419140310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:46.429489 systemd-networkd[1376]: lxc_health: Link DOWN Dec 13 01:37:46.429639 systemd-networkd[1376]: lxc_health: Lost carrier Dec 13 01:37:46.447847 containerd[1480]: time="2024-12-13T01:37:46.447434655Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:37:46.452905 containerd[1480]: 
time="2024-12-13T01:37:46.452857097Z" level=info msg="StopContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" returns successfully" Dec 13 01:37:46.455552 containerd[1480]: time="2024-12-13T01:37:46.455100855Z" level=info msg="StopPodSandbox for \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\"" Dec 13 01:37:46.455552 containerd[1480]: time="2024-12-13T01:37:46.455142513Z" level=info msg="Container to stop \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.458230 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc-shm.mount: Deactivated successfully. Dec 13 01:37:46.463654 systemd[1]: cri-containerd-2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e.scope: Deactivated successfully. Dec 13 01:37:46.463885 systemd[1]: cri-containerd-2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e.scope: Consumed 7.147s CPU time. Dec 13 01:37:46.467911 systemd[1]: cri-containerd-65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc.scope: Deactivated successfully. Dec 13 01:37:46.491420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:46.498659 containerd[1480]: time="2024-12-13T01:37:46.498587630Z" level=info msg="shim disconnected" id=2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e namespace=k8s.io Dec 13 01:37:46.498659 containerd[1480]: time="2024-12-13T01:37:46.498653273Z" level=warning msg="cleaning up after shim disconnected" id=2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e namespace=k8s.io Dec 13 01:37:46.498659 containerd[1480]: time="2024-12-13T01:37:46.498667940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:46.509481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc-rootfs.mount: Deactivated successfully. Dec 13 01:37:46.518126 containerd[1480]: time="2024-12-13T01:37:46.517637463Z" level=info msg="shim disconnected" id=65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc namespace=k8s.io Dec 13 01:37:46.518126 containerd[1480]: time="2024-12-13T01:37:46.518049296Z" level=warning msg="cleaning up after shim disconnected" id=65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc namespace=k8s.io Dec 13 01:37:46.518126 containerd[1480]: time="2024-12-13T01:37:46.518063813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:46.533010 containerd[1480]: time="2024-12-13T01:37:46.532875267Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:37:46.537222 containerd[1480]: time="2024-12-13T01:37:46.537177156Z" level=info msg="StopContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" returns successfully" Dec 13 01:37:46.538714 containerd[1480]: time="2024-12-13T01:37:46.538654268Z" level=info msg="StopPodSandbox for 
\"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\"" Dec 13 01:37:46.538886 containerd[1480]: time="2024-12-13T01:37:46.538694242Z" level=info msg="Container to stop \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.538886 containerd[1480]: time="2024-12-13T01:37:46.538830999Z" level=info msg="Container to stop \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.538886 containerd[1480]: time="2024-12-13T01:37:46.538848542Z" level=info msg="Container to stop \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.539204 containerd[1480]: time="2024-12-13T01:37:46.538863149Z" level=info msg="Container to stop \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.539204 containerd[1480]: time="2024-12-13T01:37:46.539046433Z" level=info msg="Container to stop \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:46.547317 systemd[1]: cri-containerd-22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42.scope: Deactivated successfully. 
Dec 13 01:37:46.549136 containerd[1480]: time="2024-12-13T01:37:46.548574186Z" level=info msg="TearDown network for sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" successfully" Dec 13 01:37:46.549136 containerd[1480]: time="2024-12-13T01:37:46.548605645Z" level=info msg="StopPodSandbox for \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" returns successfully" Dec 13 01:37:46.578142 containerd[1480]: time="2024-12-13T01:37:46.576997944Z" level=info msg="shim disconnected" id=22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42 namespace=k8s.io Dec 13 01:37:46.578142 containerd[1480]: time="2024-12-13T01:37:46.577082131Z" level=warning msg="cleaning up after shim disconnected" id=22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42 namespace=k8s.io Dec 13 01:37:46.578142 containerd[1480]: time="2024-12-13T01:37:46.577094375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:46.593927 containerd[1480]: time="2024-12-13T01:37:46.593819959Z" level=info msg="TearDown network for sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" successfully" Dec 13 01:37:46.593927 containerd[1480]: time="2024-12-13T01:37:46.593852740Z" level=info msg="StopPodSandbox for \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" returns successfully" Dec 13 01:37:46.737183 kubelet[2795]: I1213 01:37:46.737127 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-lib-modules\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.737183 kubelet[2795]: I1213 01:37:46.737180 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41c57c0e-0002-4599-8f32-dbe0cd6709c1-clustermesh-secrets\") pod 
\"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737200 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-cgroup\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737219 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hostproc\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737238 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-xtables-lock\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737255 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cni-path\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737278 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59rj5\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738077 kubelet[2795]: I1213 01:37:46.737295 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-bpf-maps\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 01:37:46.737313 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-net\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 01:37:46.737332 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6ltf\" (UniqueName: \"kubernetes.io/projected/7a8e2421-4c78-4311-88d0-2a5024143529-kube-api-access-f6ltf\") pod \"7a8e2421-4c78-4311-88d0-2a5024143529\" (UID: \"7a8e2421-4c78-4311-88d0-2a5024143529\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 01:37:46.737351 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a8e2421-4c78-4311-88d0-2a5024143529-cilium-config-path\") pod \"7a8e2421-4c78-4311-88d0-2a5024143529\" (UID: \"7a8e2421-4c78-4311-88d0-2a5024143529\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 01:37:46.737369 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-etc-cni-netd\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 01:37:46.737389 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hubble-tls\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738243 kubelet[2795]: I1213 
01:37:46.737408 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-kernel\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738414 kubelet[2795]: I1213 01:37:46.737427 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-config-path\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.738414 kubelet[2795]: I1213 01:37:46.737447 2795 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-run\") pod \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\" (UID: \"41c57c0e-0002-4599-8f32-dbe0cd6709c1\") " Dec 13 01:37:46.739099 kubelet[2795]: I1213 01:37:46.737587 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.739099 kubelet[2795]: I1213 01:37:46.738786 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.740223 kubelet[2795]: I1213 01:37:46.737511 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.740223 kubelet[2795]: I1213 01:37:46.739914 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.749119 kubelet[2795]: I1213 01:37:46.748965 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c57c0e-0002-4599-8f32-dbe0cd6709c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:37:46.749119 kubelet[2795]: I1213 01:37:46.749061 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.749119 kubelet[2795]: I1213 01:37:46.749085 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.749119 kubelet[2795]: I1213 01:37:46.749102 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.749119 kubelet[2795]: I1213 01:37:46.749118 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.749427 kubelet[2795]: I1213 01:37:46.749402 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8e2421-4c78-4311-88d0-2a5024143529-kube-api-access-f6ltf" (OuterVolumeSpecName: "kube-api-access-f6ltf") pod "7a8e2421-4c78-4311-88d0-2a5024143529" (UID: "7a8e2421-4c78-4311-88d0-2a5024143529"). InnerVolumeSpecName "kube-api-access-f6ltf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:46.752597 kubelet[2795]: I1213 01:37:46.751988 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5" (OuterVolumeSpecName: "kube-api-access-59rj5") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "kube-api-access-59rj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:46.753093 kubelet[2795]: I1213 01:37:46.753068 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8e2421-4c78-4311-88d0-2a5024143529-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a8e2421-4c78-4311-88d0-2a5024143529" (UID: "7a8e2421-4c78-4311-88d0-2a5024143529"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:37:46.753187 kubelet[2795]: I1213 01:37:46.753170 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.754753 kubelet[2795]: I1213 01:37:46.754713 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:46.754813 kubelet[2795]: I1213 01:37:46.754753 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:46.756861 kubelet[2795]: I1213 01:37:46.756828 2795 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41c57c0e-0002-4599-8f32-dbe0cd6709c1" (UID: "41c57c0e-0002-4599-8f32-dbe0cd6709c1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:37:46.804626 systemd[1]: Removed slice kubepods-besteffort-pod7a8e2421_4c78_4311_88d0_2a5024143529.slice - libcontainer container kubepods-besteffort-pod7a8e2421_4c78_4311_88d0_2a5024143529.slice. Dec 13 01:37:46.806243 systemd[1]: Removed slice kubepods-burstable-pod41c57c0e_0002_4599_8f32_dbe0cd6709c1.slice - libcontainer container kubepods-burstable-pod41c57c0e_0002_4599_8f32_dbe0cd6709c1.slice. Dec 13 01:37:46.806373 systemd[1]: kubepods-burstable-pod41c57c0e_0002_4599_8f32_dbe0cd6709c1.slice: Consumed 7.239s CPU time. 
Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843133 2795 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-etc-cni-netd\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843181 2795 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hubble-tls\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843200 2795 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-config-path\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843216 2795 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-run\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843238 2795 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-kernel\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843261 kubelet[2795]: I1213 01:37:46.843254 2795 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cilium-cgroup\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843286 2795 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-lib-modules\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 
13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843305 2795 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41c57c0e-0002-4599-8f32-dbe0cd6709c1-clustermesh-secrets\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843322 2795 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-hostproc\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843339 2795 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-xtables-lock\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843356 2795 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-cni-path\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843373 2795 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f6ltf\" (UniqueName: \"kubernetes.io/projected/7a8e2421-4c78-4311-88d0-2a5024143529-kube-api-access-f6ltf\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843391 2795 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a8e2421-4c78-4311-88d0-2a5024143529-cilium-config-path\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843503 kubelet[2795]: I1213 01:37:46.843408 2795 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-59rj5\" (UniqueName: \"kubernetes.io/projected/41c57c0e-0002-4599-8f32-dbe0cd6709c1-kube-api-access-59rj5\") on node \"ci-4081-2-1-f-56be046f0f\" 
DevicePath \"\"" Dec 13 01:37:46.843722 kubelet[2795]: I1213 01:37:46.843424 2795 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-bpf-maps\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:46.843722 kubelet[2795]: I1213 01:37:46.843443 2795 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41c57c0e-0002-4599-8f32-dbe0cd6709c1-host-proc-sys-net\") on node \"ci-4081-2-1-f-56be046f0f\" DevicePath \"\"" Dec 13 01:37:47.366606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42-rootfs.mount: Deactivated successfully. Dec 13 01:37:47.366727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42-shm.mount: Deactivated successfully. Dec 13 01:37:47.366797 systemd[1]: var-lib-kubelet-pods-41c57c0e\x2d0002\x2d4599\x2d8f32\x2ddbe0cd6709c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59rj5.mount: Deactivated successfully. Dec 13 01:37:47.366875 systemd[1]: var-lib-kubelet-pods-7a8e2421\x2d4c78\x2d4311\x2d88d0\x2d2a5024143529-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6ltf.mount: Deactivated successfully. Dec 13 01:37:47.366940 systemd[1]: var-lib-kubelet-pods-41c57c0e\x2d0002\x2d4599\x2d8f32\x2ddbe0cd6709c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:37:47.367013 systemd[1]: var-lib-kubelet-pods-41c57c0e\x2d0002\x2d4599\x2d8f32\x2ddbe0cd6709c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 01:37:47.523031 kubelet[2795]: I1213 01:37:47.522747 2795 scope.go:117] "RemoveContainer" containerID="9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874" Dec 13 01:37:47.529211 containerd[1480]: time="2024-12-13T01:37:47.529172017Z" level=info msg="RemoveContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\"" Dec 13 01:37:47.534890 containerd[1480]: time="2024-12-13T01:37:47.534833456Z" level=info msg="RemoveContainer for \"9402ca5f5b3a2fd8c9e47cc92a0474a60ead2daa03e8863dc433dcd6d612c874\" returns successfully" Dec 13 01:37:47.535180 kubelet[2795]: I1213 01:37:47.535126 2795 scope.go:117] "RemoveContainer" containerID="2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e" Dec 13 01:37:47.536441 containerd[1480]: time="2024-12-13T01:37:47.536201072Z" level=info msg="RemoveContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\"" Dec 13 01:37:47.540374 containerd[1480]: time="2024-12-13T01:37:47.540335898Z" level=info msg="RemoveContainer for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" returns successfully" Dec 13 01:37:47.540544 kubelet[2795]: I1213 01:37:47.540535 2795 scope.go:117] "RemoveContainer" containerID="7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8" Dec 13 01:37:47.541437 containerd[1480]: time="2024-12-13T01:37:47.541394504Z" level=info msg="RemoveContainer for \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\"" Dec 13 01:37:47.544131 containerd[1480]: time="2024-12-13T01:37:47.544102394Z" level=info msg="RemoveContainer for \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\" returns successfully" Dec 13 01:37:47.544287 kubelet[2795]: I1213 01:37:47.544264 2795 scope.go:117] "RemoveContainer" containerID="9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425" Dec 13 01:37:47.545331 containerd[1480]: time="2024-12-13T01:37:47.545093231Z" level=info msg="RemoveContainer for 
\"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\"" Dec 13 01:37:47.547717 containerd[1480]: time="2024-12-13T01:37:47.547682248Z" level=info msg="RemoveContainer for \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\" returns successfully" Dec 13 01:37:47.547855 kubelet[2795]: I1213 01:37:47.547814 2795 scope.go:117] "RemoveContainer" containerID="6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4" Dec 13 01:37:47.548648 containerd[1480]: time="2024-12-13T01:37:47.548604899Z" level=info msg="RemoveContainer for \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\"" Dec 13 01:37:47.553823 containerd[1480]: time="2024-12-13T01:37:47.553784095Z" level=info msg="RemoveContainer for \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\" returns successfully" Dec 13 01:37:47.554172 kubelet[2795]: I1213 01:37:47.553923 2795 scope.go:117] "RemoveContainer" containerID="406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0" Dec 13 01:37:47.554924 containerd[1480]: time="2024-12-13T01:37:47.554883727Z" level=info msg="RemoveContainer for \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\"" Dec 13 01:37:47.557396 containerd[1480]: time="2024-12-13T01:37:47.557373617Z" level=info msg="RemoveContainer for \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\" returns successfully" Dec 13 01:37:47.557652 kubelet[2795]: I1213 01:37:47.557612 2795 scope.go:117] "RemoveContainer" containerID="2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e" Dec 13 01:37:47.564721 containerd[1480]: time="2024-12-13T01:37:47.559261249Z" level=error msg="ContainerStatus for \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\": not found" Dec 13 01:37:47.567840 kubelet[2795]: E1213 
01:37:47.567817 2795 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\": not found" containerID="2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e" Dec 13 01:37:47.572885 kubelet[2795]: I1213 01:37:47.572852 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e"} err="failed to get container status \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2917073fec4c202df1b6a56d7161e073aee30f90b853d0080b8c652eb764074e\": not found" Dec 13 01:37:47.572885 kubelet[2795]: I1213 01:37:47.572879 2795 scope.go:117] "RemoveContainer" containerID="7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8" Dec 13 01:37:47.573106 containerd[1480]: time="2024-12-13T01:37:47.573070252Z" level=error msg="ContainerStatus for \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\": not found" Dec 13 01:37:47.573229 kubelet[2795]: E1213 01:37:47.573203 2795 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\": not found" containerID="7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8" Dec 13 01:37:47.582284 kubelet[2795]: I1213 01:37:47.582166 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8"} err="failed to get 
container status \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ffa702ac93a1250190488fb1f0afcf2357b4cde684148712780e7bff01b61c8\": not found" Dec 13 01:37:47.582284 kubelet[2795]: I1213 01:37:47.582203 2795 scope.go:117] "RemoveContainer" containerID="9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425" Dec 13 01:37:47.582554 containerd[1480]: time="2024-12-13T01:37:47.582491604Z" level=error msg="ContainerStatus for \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\": not found" Dec 13 01:37:47.582806 kubelet[2795]: E1213 01:37:47.582700 2795 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\": not found" containerID="9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425" Dec 13 01:37:47.582806 kubelet[2795]: I1213 01:37:47.582741 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425"} err="failed to get container status \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\": rpc error: code = NotFound desc = an error occurred when try to find container \"9730efa8ee21f28b4b4b3047f594bf00b9c4e7f8c449be76679c0ed545c69425\": not found" Dec 13 01:37:47.582806 kubelet[2795]: I1213 01:37:47.582752 2795 scope.go:117] "RemoveContainer" containerID="6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4" Dec 13 01:37:47.583150 containerd[1480]: time="2024-12-13T01:37:47.582997865Z" level=error msg="ContainerStatus for 
\"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\": not found" Dec 13 01:37:47.583194 kubelet[2795]: E1213 01:37:47.583121 2795 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\": not found" containerID="6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4" Dec 13 01:37:47.583355 kubelet[2795]: I1213 01:37:47.583256 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4"} err="failed to get container status \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6355d24ca414606041b29ce0b9853e2c4776c2747d32da1e461292535c7ef3a4\": not found" Dec 13 01:37:47.583355 kubelet[2795]: I1213 01:37:47.583269 2795 scope.go:117] "RemoveContainer" containerID="406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0" Dec 13 01:37:47.583681 containerd[1480]: time="2024-12-13T01:37:47.583547626Z" level=error msg="ContainerStatus for \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\": not found" Dec 13 01:37:47.583727 kubelet[2795]: E1213 01:37:47.583649 2795 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\": not found" 
containerID="406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0" Dec 13 01:37:47.583727 kubelet[2795]: I1213 01:37:47.583669 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0"} err="failed to get container status \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"406f5d32c582a41573dffbfccee0fb68e861bb1bfabe8066062e653d343d88b0\": not found" Dec 13 01:37:48.413938 sshd[4367]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:48.417725 systemd[1]: sshd@20-188.245.236.49:22-147.75.109.163:52588.service: Deactivated successfully. Dec 13 01:37:48.419857 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:37:48.422146 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:37:48.423571 systemd-logind[1467]: Removed session 20. Dec 13 01:37:48.589758 systemd[1]: Started sshd@21-188.245.236.49:22-147.75.109.163:58082.service - OpenSSH per-connection server daemon (147.75.109.163:58082). 
Dec 13 01:37:48.786398 kubelet[2795]: I1213 01:37:48.786259 2795 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" path="/var/lib/kubelet/pods/41c57c0e-0002-4599-8f32-dbe0cd6709c1/volumes" Dec 13 01:37:48.787876 kubelet[2795]: I1213 01:37:48.787816 2795 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7a8e2421-4c78-4311-88d0-2a5024143529" path="/var/lib/kubelet/pods/7a8e2421-4c78-4311-88d0-2a5024143529/volumes" Dec 13 01:37:49.591950 sshd[4525]: Accepted publickey for core from 147.75.109.163 port 58082 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:49.593578 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:49.598629 systemd-logind[1467]: New session 21 of user core. Dec 13 01:37:49.602664 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:37:49.930071 kubelet[2795]: E1213 01:37:49.930036 2795 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:37:50.628046 kubelet[2795]: I1213 01:37:50.627998 2795 topology_manager.go:215] "Topology Admit Handler" podUID="0461e66d-116f-43e9-b798-c448ac90a898" podNamespace="kube-system" podName="cilium-sg5h5" Dec 13 01:37:50.631249 kubelet[2795]: E1213 01:37:50.629766 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a8e2421-4c78-4311-88d0-2a5024143529" containerName="cilium-operator" Dec 13 01:37:50.631249 kubelet[2795]: E1213 01:37:50.629829 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="mount-bpf-fs" Dec 13 01:37:50.631249 kubelet[2795]: E1213 01:37:50.629849 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="mount-cgroup" Dec 13 
01:37:50.631249 kubelet[2795]: E1213 01:37:50.629860 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="apply-sysctl-overwrites" Dec 13 01:37:50.631249 kubelet[2795]: E1213 01:37:50.629904 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="clean-cilium-state" Dec 13 01:37:50.631249 kubelet[2795]: E1213 01:37:50.629910 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="cilium-agent" Dec 13 01:37:50.631249 kubelet[2795]: I1213 01:37:50.629952 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8e2421-4c78-4311-88d0-2a5024143529" containerName="cilium-operator" Dec 13 01:37:50.631249 kubelet[2795]: I1213 01:37:50.629960 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c57c0e-0002-4599-8f32-dbe0cd6709c1" containerName="cilium-agent" Dec 13 01:37:50.647045 systemd[1]: Created slice kubepods-burstable-pod0461e66d_116f_43e9_b798_c448ac90a898.slice - libcontainer container kubepods-burstable-pod0461e66d_116f_43e9_b798_c448ac90a898.slice. 
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675352 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0461e66d-116f-43e9-b798-c448ac90a898-cilium-config-path\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675413 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0461e66d-116f-43e9-b798-c448ac90a898-cilium-ipsec-secrets\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675461 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-cilium-run\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675480 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-lib-modules\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675496 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-xtables-lock\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676624 kubelet[2795]: I1213 01:37:50.675526 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0461e66d-116f-43e9-b798-c448ac90a898-clustermesh-secrets\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675558 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-bpf-maps\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675590 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-cilium-cgroup\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675617 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-etc-cni-netd\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675635 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-host-proc-sys-net\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675652 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksd2\" (UniqueName: \"kubernetes.io/projected/0461e66d-116f-43e9-b798-c448ac90a898-kube-api-access-5ksd2\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.676963 kubelet[2795]: I1213 01:37:50.675677 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-host-proc-sys-kernel\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.677121 kubelet[2795]: I1213 01:37:50.675698 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0461e66d-116f-43e9-b798-c448ac90a898-hubble-tls\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.677121 kubelet[2795]: I1213 01:37:50.675725 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-hostproc\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.677121 kubelet[2795]: I1213 01:37:50.675755 2795 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0461e66d-116f-43e9-b798-c448ac90a898-cni-path\") pod \"cilium-sg5h5\" (UID: \"0461e66d-116f-43e9-b798-c448ac90a898\") " pod="kube-system/cilium-sg5h5"
Dec 13 01:37:50.766786 sshd[4525]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:50.770845 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:37:50.771565 systemd[1]: sshd@21-188.245.236.49:22-147.75.109.163:58082.service: Deactivated successfully.
Dec 13 01:37:50.773919 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:37:50.774835 systemd-logind[1467]: Removed session 21.
Dec 13 01:37:50.940781 systemd[1]: Started sshd@22-188.245.236.49:22-147.75.109.163:58090.service - OpenSSH per-connection server daemon (147.75.109.163:58090).
Dec 13 01:37:50.959306 containerd[1480]: time="2024-12-13T01:37:50.959212866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg5h5,Uid:0461e66d-116f-43e9-b798-c448ac90a898,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:50.986030 containerd[1480]: time="2024-12-13T01:37:50.985706564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:50.986030 containerd[1480]: time="2024-12-13T01:37:50.985766707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:50.986030 containerd[1480]: time="2024-12-13T01:37:50.985783108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:50.986030 containerd[1480]: time="2024-12-13T01:37:50.985861806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:51.004683 systemd[1]: Started cri-containerd-ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce.scope - libcontainer container ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce.
Dec 13 01:37:51.028032 containerd[1480]: time="2024-12-13T01:37:51.027990090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg5h5,Uid:0461e66d-116f-43e9-b798-c448ac90a898,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\""
Dec 13 01:37:51.031181 containerd[1480]: time="2024-12-13T01:37:51.031135561Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:37:51.039802 containerd[1480]: time="2024-12-13T01:37:51.039766711Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2\""
Dec 13 01:37:51.040106 containerd[1480]: time="2024-12-13T01:37:51.040070080Z" level=info msg="StartContainer for \"590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2\""
Dec 13 01:37:51.065669 systemd[1]: Started cri-containerd-590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2.scope - libcontainer container 590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2.
Dec 13 01:37:51.094048 containerd[1480]: time="2024-12-13T01:37:51.094008840Z" level=info msg="StartContainer for \"590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2\" returns successfully"
Dec 13 01:37:51.103313 systemd[1]: cri-containerd-590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2.scope: Deactivated successfully.
Dec 13 01:37:51.133417 containerd[1480]: time="2024-12-13T01:37:51.133334538Z" level=info msg="shim disconnected" id=590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2 namespace=k8s.io
Dec 13 01:37:51.133417 containerd[1480]: time="2024-12-13T01:37:51.133385474Z" level=warning msg="cleaning up after shim disconnected" id=590e5010cbe3b3bae70f4c65e44824c2d7f4535d10307f201d8cc0dc4c05e8d2 namespace=k8s.io
Dec 13 01:37:51.133417 containerd[1480]: time="2024-12-13T01:37:51.133394792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:51.502105 containerd[1480]: time="2024-12-13T01:37:51.501948985Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:37:51.519597 containerd[1480]: time="2024-12-13T01:37:51.519394861Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70\""
Dec 13 01:37:51.520234 containerd[1480]: time="2024-12-13T01:37:51.520134087Z" level=info msg="StartContainer for \"e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70\""
Dec 13 01:37:51.550692 systemd[1]: Started cri-containerd-e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70.scope - libcontainer container e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70.
Dec 13 01:37:51.578164 containerd[1480]: time="2024-12-13T01:37:51.577938037Z" level=info msg="StartContainer for \"e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70\" returns successfully"
Dec 13 01:37:51.584986 systemd[1]: cri-containerd-e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70.scope: Deactivated successfully.
Dec 13 01:37:51.610126 containerd[1480]: time="2024-12-13T01:37:51.610037680Z" level=info msg="shim disconnected" id=e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70 namespace=k8s.io
Dec 13 01:37:51.610126 containerd[1480]: time="2024-12-13T01:37:51.610108051Z" level=warning msg="cleaning up after shim disconnected" id=e676a7ef910d1b8976f95dd85a25554172d186945140023619ba44ddb465cc70 namespace=k8s.io
Dec 13 01:37:51.610126 containerd[1480]: time="2024-12-13T01:37:51.610118922Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:51.925393 sshd[4544]: Accepted publickey for core from 147.75.109.163 port 58090 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:51.927295 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:51.932864 systemd-logind[1467]: New session 22 of user core.
Dec 13 01:37:51.945729 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:37:52.506151 containerd[1480]: time="2024-12-13T01:37:52.506065141Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:37:52.528322 containerd[1480]: time="2024-12-13T01:37:52.525619101Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112\""
Dec 13 01:37:52.528322 containerd[1480]: time="2024-12-13T01:37:52.526952493Z" level=info msg="StartContainer for \"9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112\""
Dec 13 01:37:52.526627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264404815.mount: Deactivated successfully.
Dec 13 01:37:52.559772 systemd[1]: Started cri-containerd-9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112.scope - libcontainer container 9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112.
Dec 13 01:37:52.594483 containerd[1480]: time="2024-12-13T01:37:52.594438684Z" level=info msg="StartContainer for \"9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112\" returns successfully"
Dec 13 01:37:52.601184 systemd[1]: cri-containerd-9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112.scope: Deactivated successfully.
Dec 13 01:37:52.609426 sshd[4544]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:52.614962 systemd[1]: sshd@22-188.245.236.49:22-147.75.109.163:58090.service: Deactivated successfully.
Dec 13 01:37:52.618055 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:37:52.620845 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:37:52.622987 systemd-logind[1467]: Removed session 22.
Dec 13 01:37:52.634502 containerd[1480]: time="2024-12-13T01:37:52.634447404Z" level=info msg="shim disconnected" id=9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112 namespace=k8s.io
Dec 13 01:37:52.634502 containerd[1480]: time="2024-12-13T01:37:52.634497760Z" level=warning msg="cleaning up after shim disconnected" id=9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112 namespace=k8s.io
Dec 13 01:37:52.634502 containerd[1480]: time="2024-12-13T01:37:52.634506686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:52.780768 systemd[1]: Started sshd@23-188.245.236.49:22-147.75.109.163:58102.service - OpenSSH per-connection server daemon (147.75.109.163:58102).
Dec 13 01:37:52.788853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f8c1a4574466bd44dbba3ce247451aa3fab573c6b5539adfee16686d3351112-rootfs.mount: Deactivated successfully.
Dec 13 01:37:53.172569 kubelet[2795]: I1213 01:37:53.172491 2795 setters.go:568] "Node became not ready" node="ci-4081-2-1-f-56be046f0f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:37:53Z","lastTransitionTime":"2024-12-13T01:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:37:53.510574 containerd[1480]: time="2024-12-13T01:37:53.509838997Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:37:53.529133 containerd[1480]: time="2024-12-13T01:37:53.528813530Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a\""
Dec 13 01:37:53.529680 containerd[1480]: time="2024-12-13T01:37:53.529646723Z" level=info msg="StartContainer for \"ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a\""
Dec 13 01:37:53.561653 systemd[1]: Started cri-containerd-ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a.scope - libcontainer container ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a.
Dec 13 01:37:53.584373 containerd[1480]: time="2024-12-13T01:37:53.583456120Z" level=info msg="StartContainer for \"ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a\" returns successfully"
Dec 13 01:37:53.583625 systemd[1]: cri-containerd-ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a.scope: Deactivated successfully.
Dec 13 01:37:53.606389 containerd[1480]: time="2024-12-13T01:37:53.606321973Z" level=info msg="shim disconnected" id=ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a namespace=k8s.io
Dec 13 01:37:53.606631 containerd[1480]: time="2024-12-13T01:37:53.606440825Z" level=warning msg="cleaning up after shim disconnected" id=ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a namespace=k8s.io
Dec 13 01:37:53.606631 containerd[1480]: time="2024-12-13T01:37:53.606452347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:53.754308 sshd[4771]: Accepted publickey for core from 147.75.109.163 port 58102 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:53.756178 sshd[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:53.761165 systemd-logind[1467]: New session 23 of user core.
Dec 13 01:37:53.763690 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:37:53.785488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea1b2028c342f9ed7e0bc4c78c47c7249e12ec2bc27b6a5eea35e1c4a2d8d78a-rootfs.mount: Deactivated successfully.
Dec 13 01:37:54.514695 containerd[1480]: time="2024-12-13T01:37:54.514382016Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:37:54.530869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762574475.mount: Deactivated successfully.
Dec 13 01:37:54.534795 containerd[1480]: time="2024-12-13T01:37:54.534754511Z" level=info msg="CreateContainer within sandbox \"ba4b688f25d8fd713745c36a223f7cfd577b0550300932eb1f786d45a4bbe0ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8\""
Dec 13 01:37:54.535401 containerd[1480]: time="2024-12-13T01:37:54.535356429Z" level=info msg="StartContainer for \"69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8\""
Dec 13 01:37:54.566639 systemd[1]: Started cri-containerd-69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8.scope - libcontainer container 69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8.
Dec 13 01:37:54.594587 containerd[1480]: time="2024-12-13T01:37:54.594460959Z" level=info msg="StartContainer for \"69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8\" returns successfully"
Dec 13 01:37:54.785266 kubelet[2795]: E1213 01:37:54.784430 2795 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-jqpk5" podUID="27322cdd-c802-4340-a0ef-9dd92ed31f79"
Dec 13 01:37:55.119573 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:37:58.045671 systemd-networkd[1376]: lxc_health: Link UP
Dec 13 01:37:58.053721 systemd-networkd[1376]: lxc_health: Gained carrier
Dec 13 01:37:58.979410 kubelet[2795]: I1213 01:37:58.979194 2795 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sg5h5" podStartSLOduration=8.979159361 podStartE2EDuration="8.979159361s" podCreationTimestamp="2024-12-13 01:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:55.528292786 +0000 UTC m=+350.897974864" watchObservedRunningTime="2024-12-13 01:37:58.979159361 +0000 UTC m=+354.348841438"
Dec 13 01:37:59.876360 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Dec 13 01:38:01.073476 systemd[1]: run-containerd-runc-k8s.io-69315bcc55c25193cc4fc12678bbf782faa3f98aa08cd97efc737c2e904521d8-runc.UtWZ48.mount: Deactivated successfully.
Dec 13 01:38:04.801972 containerd[1480]: time="2024-12-13T01:38:04.801920798Z" level=info msg="StopPodSandbox for \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\""
Dec 13 01:38:04.802732 containerd[1480]: time="2024-12-13T01:38:04.802031736Z" level=info msg="TearDown network for sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" successfully"
Dec 13 01:38:04.802732 containerd[1480]: time="2024-12-13T01:38:04.802049249Z" level=info msg="StopPodSandbox for \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" returns successfully"
Dec 13 01:38:04.802732 containerd[1480]: time="2024-12-13T01:38:04.802457815Z" level=info msg="RemovePodSandbox for \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\""
Dec 13 01:38:04.802732 containerd[1480]: time="2024-12-13T01:38:04.802498110Z" level=info msg="Forcibly stopping sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\""
Dec 13 01:38:04.802732 containerd[1480]: time="2024-12-13T01:38:04.802586176Z" level=info msg="TearDown network for sandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" successfully"
Dec 13 01:38:04.808295 containerd[1480]: time="2024-12-13T01:38:04.808225705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:04.808295 containerd[1480]: time="2024-12-13T01:38:04.808294354Z" level=info msg="RemovePodSandbox \"65d33bcb7a2414386a06b069c1b8916541a04c30c7b1d3cf4182d0f565c25fcc\" returns successfully"
Dec 13 01:38:04.809121 containerd[1480]: time="2024-12-13T01:38:04.808921500Z" level=info msg="StopPodSandbox for \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\""
Dec 13 01:38:04.809121 containerd[1480]: time="2024-12-13T01:38:04.809016618Z" level=info msg="TearDown network for sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" successfully"
Dec 13 01:38:04.809121 containerd[1480]: time="2024-12-13T01:38:04.809035314Z" level=info msg="StopPodSandbox for \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" returns successfully"
Dec 13 01:38:04.809685 containerd[1480]: time="2024-12-13T01:38:04.809642042Z" level=info msg="RemovePodSandbox for \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\""
Dec 13 01:38:04.809685 containerd[1480]: time="2024-12-13T01:38:04.809674714Z" level=info msg="Forcibly stopping sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\""
Dec 13 01:38:04.809807 containerd[1480]: time="2024-12-13T01:38:04.809731970Z" level=info msg="TearDown network for sandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" successfully"
Dec 13 01:38:04.814972 containerd[1480]: time="2024-12-13T01:38:04.814915263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:04.814972 containerd[1480]: time="2024-12-13T01:38:04.814961760Z" level=info msg="RemovePodSandbox \"22e67f2695d4e742eb7a8571cdde98d5229f6951e5141efa7de50f14b63f3f42\" returns successfully"
Dec 13 01:38:05.406951 kubelet[2795]: E1213 01:38:05.406907 2795 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45266->127.0.0.1:43779: write tcp 127.0.0.1:45266->127.0.0.1:43779: write: broken pipe
Dec 13 01:38:05.569025 sshd[4771]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:05.572785 systemd[1]: sshd@23-188.245.236.49:22-147.75.109.163:58102.service: Deactivated successfully.
Dec 13 01:38:05.575251 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:38:05.577148 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:38:05.579381 systemd-logind[1467]: Removed session 23.
Dec 13 01:38:20.149109 systemd[1]: cri-containerd-d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311.scope: Deactivated successfully.
Dec 13 01:38:20.150024 systemd[1]: cri-containerd-d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311.scope: Consumed 5.284s CPU time, 24.4M memory peak, 0B memory swap peak.
Dec 13 01:38:20.192796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311-rootfs.mount: Deactivated successfully.
Dec 13 01:38:20.218620 containerd[1480]: time="2024-12-13T01:38:20.218503203Z" level=info msg="shim disconnected" id=d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311 namespace=k8s.io
Dec 13 01:38:20.218620 containerd[1480]: time="2024-12-13T01:38:20.218565730Z" level=warning msg="cleaning up after shim disconnected" id=d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311 namespace=k8s.io
Dec 13 01:38:20.218620 containerd[1480]: time="2024-12-13T01:38:20.218574327Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:20.563017 kubelet[2795]: I1213 01:38:20.562094 2795 scope.go:117] "RemoveContainer" containerID="d1cfd649cc550f5e41c93918ec58db3651461f476ade6fbeffa4ac3927ef2311"
Dec 13 01:38:20.565549 containerd[1480]: time="2024-12-13T01:38:20.565457860Z" level=info msg="CreateContainer within sandbox \"c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:38:20.580327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089854128.mount: Deactivated successfully.
Dec 13 01:38:20.584053 containerd[1480]: time="2024-12-13T01:38:20.584018767Z" level=info msg="CreateContainer within sandbox \"c1bcf2eaed929b3f21e52a99bcfb6d3426df0b87d2677f7f77ebda6b29f3d496\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"44dc61eb1dcf810f918362bcee3816de7f48d080a30a7a571a24d443d61955c7\""
Dec 13 01:38:20.584414 containerd[1480]: time="2024-12-13T01:38:20.584386266Z" level=info msg="StartContainer for \"44dc61eb1dcf810f918362bcee3816de7f48d080a30a7a571a24d443d61955c7\""
Dec 13 01:38:20.613699 systemd[1]: Started cri-containerd-44dc61eb1dcf810f918362bcee3816de7f48d080a30a7a571a24d443d61955c7.scope - libcontainer container 44dc61eb1dcf810f918362bcee3816de7f48d080a30a7a571a24d443d61955c7.
Dec 13 01:38:20.616069 kubelet[2795]: E1213 01:38:20.615893 2795 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38034->10.0.0.2:2379: read: connection timed out"
Dec 13 01:38:20.626425 systemd[1]: cri-containerd-89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422.scope: Deactivated successfully.
Dec 13 01:38:20.626992 systemd[1]: cri-containerd-89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422.scope: Consumed 1.653s CPU time, 17.9M memory peak, 0B memory swap peak.
Dec 13 01:38:20.654552 containerd[1480]: time="2024-12-13T01:38:20.654367780Z" level=info msg="shim disconnected" id=89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422 namespace=k8s.io
Dec 13 01:38:20.654552 containerd[1480]: time="2024-12-13T01:38:20.654412894Z" level=warning msg="cleaning up after shim disconnected" id=89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422 namespace=k8s.io
Dec 13 01:38:20.654552 containerd[1480]: time="2024-12-13T01:38:20.654421370Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:20.671573 containerd[1480]: time="2024-12-13T01:38:20.670930078Z" level=info msg="StartContainer for \"44dc61eb1dcf810f918362bcee3816de7f48d080a30a7a571a24d443d61955c7\" returns successfully"
Dec 13 01:38:21.194189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422-rootfs.mount: Deactivated successfully.
Dec 13 01:38:21.565505 kubelet[2795]: I1213 01:38:21.565122 2795 scope.go:117] "RemoveContainer" containerID="89d705dff35facef02bfad7c251c3da3488e839151ece9fc84514780629c4422"
Dec 13 01:38:21.569575 containerd[1480]: time="2024-12-13T01:38:21.568245525Z" level=info msg="CreateContainer within sandbox \"ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:38:21.584258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836949913.mount: Deactivated successfully.
Dec 13 01:38:21.588691 containerd[1480]: time="2024-12-13T01:38:21.588648858Z" level=info msg="CreateContainer within sandbox \"ba59479bc8df2fc4cf1cf0ef48ff5fdae78e67d75fc77d88f2e4ce51a3020843\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8de90ebed24eb0faf537f967e1db5e2e7c17b7d2f26fd6b066e37ee284c72f03\""
Dec 13 01:38:21.589141 containerd[1480]: time="2024-12-13T01:38:21.589112237Z" level=info msg="StartContainer for \"8de90ebed24eb0faf537f967e1db5e2e7c17b7d2f26fd6b066e37ee284c72f03\""
Dec 13 01:38:21.617644 systemd[1]: Started cri-containerd-8de90ebed24eb0faf537f967e1db5e2e7c17b7d2f26fd6b066e37ee284c72f03.scope - libcontainer container 8de90ebed24eb0faf537f967e1db5e2e7c17b7d2f26fd6b066e37ee284c72f03.
Dec 13 01:38:21.660599 containerd[1480]: time="2024-12-13T01:38:21.660491893Z" level=info msg="StartContainer for \"8de90ebed24eb0faf537f967e1db5e2e7c17b7d2f26fd6b066e37ee284c72f03\" returns successfully"
Dec 13 01:38:25.551941 kubelet[2795]: E1213 01:38:25.551869 2795 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37858->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-f-56be046f0f.181098d485f2048f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-f-56be046f0f,UID:6084332dbea7d099edb1ce0012185d5b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-f-56be046f0f,},FirstTimestamp:2024-12-13 01:38:15.085360271 +0000 UTC m=+370.455042369,LastTimestamp:2024-12-13 01:38:15.085360271 +0000 UTC m=+370.455042369,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-f-56be046f0f,}"