Dec 13 02:07:43.944874 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 02:07:43.944898 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 02:07:43.944909 kernel: KASLR enabled
Dec 13 02:07:43.944915 kernel: efi: EFI v2.7 by EDK II
Dec 13 02:07:43.944921 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x1347a1018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 02:07:43.944926 kernel: random: crng init done
Dec 13 02:07:43.944934 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:07:43.944939 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 02:07:43.944946 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 02:07:43.944952 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944959 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944965 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944971 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944978 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944985 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944993 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.944999 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.945005 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:07:43.945012 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 02:07:43.945018 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 02:07:43.945024 kernel: NUMA: Failed to initialise from firmware
Dec 13 02:07:43.945031 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 02:07:43.945037 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 02:07:43.945043 kernel: Zone ranges:
Dec 13 02:07:43.945049 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 02:07:43.945056 kernel: DMA32 empty
Dec 13 02:07:43.945063 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 02:07:43.945069 kernel: Movable zone start for each node
Dec 13 02:07:43.945076 kernel: Early memory node ranges
Dec 13 02:07:43.945082 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 02:07:43.945089 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 02:07:43.945095 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 02:07:43.945101 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 02:07:43.945107 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 02:07:43.945114 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 02:07:43.945120 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 02:07:43.945127 kernel: psci: probing for conduit method from ACPI.
Dec 13 02:07:43.945134 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 02:07:43.945141 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 02:07:43.945147 kernel: psci: Trusted OS migration not required
Dec 13 02:07:43.945156 kernel: psci: SMC Calling Convention v1.1
Dec 13 02:07:43.945163 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 02:07:43.945170 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 02:07:43.945178 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 02:07:43.945185 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 02:07:43.945191 kernel: Detected PIPT I-cache on CPU0
Dec 13 02:07:43.945198 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 02:07:43.945204 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 02:07:43.945211 kernel: CPU features: detected: Spectre-v4
Dec 13 02:07:43.945218 kernel: CPU features: detected: Spectre-BHB
Dec 13 02:07:43.945225 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 02:07:43.945232 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 02:07:43.945239 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 02:07:43.945246 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 02:07:43.945254 kernel: alternatives: applying boot alternatives
Dec 13 02:07:43.945262 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 02:07:43.945269 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:07:43.945276 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 02:07:43.945283 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:07:43.945289 kernel: Fallback order for Node 0: 0
Dec 13 02:07:43.945299 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 02:07:43.945307 kernel: Policy zone: Normal
Dec 13 02:07:43.945315 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:07:43.945322 kernel: software IO TLB: area num 2.
Dec 13 02:07:43.945330 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 02:07:43.945339 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Dec 13 02:07:43.945346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:07:43.945353 kernel: trace event string verifier disabled
Dec 13 02:07:43.945359 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:07:43.945367 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:07:43.945374 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:07:43.945381 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:07:43.945388 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:07:43.945395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:07:43.945401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:07:43.945408 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 02:07:43.945417 kernel: GICv3: 256 SPIs implemented
Dec 13 02:07:43.945423 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 02:07:43.945430 kernel: Root IRQ handler: gic_handle_irq
Dec 13 02:07:43.945436 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 02:07:43.945444 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 02:07:43.945450 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 02:07:43.945457 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 02:07:43.945464 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 02:07:43.945471 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 02:07:43.945478 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 02:07:43.945485 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:07:43.945494 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 02:07:43.945501 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 02:07:43.945508 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 02:07:43.945515 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 02:07:43.945522 kernel: Console: colour dummy device 80x25
Dec 13 02:07:43.945528 kernel: ACPI: Core revision 20230628
Dec 13 02:07:43.945553 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 02:07:43.945563 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:07:43.945823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:07:43.945834 kernel: landlock: Up and running.
Dec 13 02:07:43.945846 kernel: SELinux: Initializing.
Dec 13 02:07:43.945853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 02:07:43.945860 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 02:07:43.945868 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:07:43.945875 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:07:43.945882 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:07:43.945889 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:07:43.945896 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 02:07:43.945903 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 02:07:43.945911 kernel: Remapping and enabling EFI services.
Dec 13 02:07:43.945918 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:07:43.945925 kernel: Detected PIPT I-cache on CPU1
Dec 13 02:07:43.945932 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 02:07:43.945939 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 02:07:43.945946 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 02:07:43.945953 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 02:07:43.945960 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:07:43.945967 kernel: SMP: Total of 2 processors activated.
Dec 13 02:07:43.945974 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 02:07:43.945982 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 02:07:43.945989 kernel: CPU features: detected: Common not Private translations
Dec 13 02:07:43.946001 kernel: CPU features: detected: CRC32 instructions
Dec 13 02:07:43.946010 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 02:07:43.946017 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 02:07:43.946025 kernel: CPU features: detected: LSE atomic instructions
Dec 13 02:07:43.946032 kernel: CPU features: detected: Privileged Access Never
Dec 13 02:07:43.946039 kernel: CPU features: detected: RAS Extension Support
Dec 13 02:07:43.946046 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 02:07:43.946055 kernel: CPU: All CPU(s) started at EL1
Dec 13 02:07:43.946063 kernel: alternatives: applying system-wide alternatives
Dec 13 02:07:43.946070 kernel: devtmpfs: initialized
Dec 13 02:07:43.946077 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:07:43.946085 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:07:43.946092 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:07:43.946099 kernel: SMBIOS 3.0.0 present.
Dec 13 02:07:43.946108 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 02:07:43.946115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:07:43.946122 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 02:07:43.946130 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 02:07:43.946137 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 02:07:43.946144 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:07:43.946152 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Dec 13 02:07:43.946159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:07:43.946166 kernel: cpuidle: using governor menu
Dec 13 02:07:43.946175 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 02:07:43.946182 kernel: ASID allocator initialised with 32768 entries
Dec 13 02:07:43.946190 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:07:43.946197 kernel: Serial: AMBA PL011 UART driver
Dec 13 02:07:43.946205 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 02:07:43.946212 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 02:07:43.946219 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 02:07:43.946227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:07:43.946234 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 02:07:43.946244 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 02:07:43.946251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 02:07:43.946259 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:07:43.946266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:07:43.946274 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 02:07:43.946281 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 02:07:43.946288 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:07:43.946296 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:07:43.946303 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:07:43.946311 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:07:43.946318 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:07:43.946325 kernel: ACPI: Interpreter enabled
Dec 13 02:07:43.946332 kernel: ACPI: Using GIC for interrupt routing
Dec 13 02:07:43.946340 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 02:07:43.946347 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 02:07:43.946354 kernel: printk: console [ttyAMA0] enabled
Dec 13 02:07:43.946361 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:07:43.946508 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:07:43.946606 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 02:07:43.946675 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 02:07:43.946739 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 02:07:43.946805 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 02:07:43.946815 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 02:07:43.946822 kernel: PCI host bridge to bus 0000:00
Dec 13 02:07:43.946897 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 02:07:43.946962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 02:07:43.947021 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 02:07:43.947081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:07:43.947160 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 02:07:43.947241 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 02:07:43.947316 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 02:07:43.947443 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 02:07:43.947534 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.947679 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 02:07:43.947757 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.947828 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 02:07:43.947899 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.949729 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 02:07:43.949818 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.949890 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 02:07:43.949966 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.950034 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 02:07:43.950110 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.950188 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 02:07:43.950262 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.950329 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 02:07:43.950403 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.950469 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 02:07:43.952609 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:07:43.952731 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 02:07:43.952822 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 02:07:43.952889 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 02:07:43.952968 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:07:43.953036 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 02:07:43.953103 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 02:07:43.953171 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 02:07:43.953250 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 02:07:43.953317 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 02:07:43.953394 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 02:07:43.953461 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 02:07:43.953528 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 02:07:43.956773 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 02:07:43.956855 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 02:07:43.956941 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 02:07:43.957012 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 02:07:43.957090 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 02:07:43.957160 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 02:07:43.957229 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 02:07:43.957305 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:07:43.957378 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 02:07:43.957446 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 02:07:43.957514 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 02:07:43.957644 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 02:07:43.957716 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 02:07:43.957785 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 02:07:43.957865 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 02:07:43.957935 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 02:07:43.958004 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 02:07:43.958076 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 02:07:43.958145 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 02:07:43.958214 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 02:07:43.958285 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 02:07:43.958352 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 02:07:43.958420 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 02:07:43.958490 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 02:07:43.958645 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 02:07:43.958715 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 02:07:43.958783 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 02:07:43.958850 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 02:07:43.958913 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 02:07:43.958985 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 02:07:43.959050 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 02:07:43.959115 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 02:07:43.959182 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 02:07:43.959249 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 02:07:43.959316 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 02:07:43.959405 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 02:07:43.959485 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 02:07:43.959589 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 02:07:43.959676 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 02:07:43.959743 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 02:07:43.959816 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 02:07:43.959887 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 02:07:43.959958 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 02:07:43.960028 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 02:07:43.960102 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 02:07:43.960170 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 02:07:43.960240 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 02:07:43.960307 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 02:07:43.960375 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 02:07:43.960444 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 02:07:43.960514 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 02:07:43.963626 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 02:07:43.963718 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 02:07:43.963789 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 02:07:43.963861 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 02:07:43.963932 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 02:07:43.964005 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 02:07:43.964078 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 02:07:43.964147 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 02:07:43.964214 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 02:07:43.964282 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 02:07:43.964349 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 02:07:43.964417 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 02:07:43.964482 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 02:07:43.964576 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 02:07:43.964654 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 02:07:43.964725 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 02:07:43.964797 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 02:07:43.964883 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 02:07:43.965057 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 02:07:43.965133 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 02:07:43.965201 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 02:07:43.965271 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 02:07:43.965346 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 02:07:43.965418 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 02:07:43.965486 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 02:07:43.965584 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 02:07:43.965666 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 02:07:43.965737 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 02:07:43.965821 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 02:07:43.965895 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 02:07:43.965968 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 02:07:43.966035 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 02:07:43.966102 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 02:07:43.966177 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 02:07:43.966247 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 02:07:43.966317 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 02:07:43.966385 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 02:07:43.966452 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 02:07:43.966528 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 02:07:43.966620 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 02:07:43.966692 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 02:07:43.966760 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 02:07:43.966831 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 02:07:43.966899 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 02:07:43.966976 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 02:07:43.967045 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 02:07:43.967113 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 02:07:43.967181 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 02:07:43.967249 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 02:07:43.967325 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 02:07:43.967454 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 02:07:43.967530 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 02:07:43.967625 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 02:07:43.967695 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 02:07:43.967772 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 02:07:43.967843 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 02:07:43.967912 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 02:07:43.967980 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 02:07:43.968053 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 02:07:43.968120 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 02:07:43.968194 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 02:07:43.968265 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 02:07:43.968336 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 02:07:43.968404 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 02:07:43.968472 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 02:07:43.968551 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 02:07:43.968629 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 02:07:43.968699 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 02:07:43.968767 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 02:07:43.968837 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 02:07:43.968905 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 02:07:43.968976 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 02:07:43.969043 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 02:07:43.969110 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 02:07:43.969180 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 02:07:43.969250 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 02:07:43.969311 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 02:07:43.969371 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 02:07:43.969450 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 02:07:43.969514 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 02:07:43.972412 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 02:07:43.972514 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 02:07:43.972597 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 02:07:43.972660 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 02:07:43.972728 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 02:07:43.972789 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 02:07:43.972849 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 02:07:43.972925 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 02:07:43.972988 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 02:07:43.973050 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 02:07:43.973129 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 02:07:43.973193 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 02:07:43.973253 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 02:07:43.973322 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 02:07:43.973384 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 02:07:43.973444 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 02:07:43.973513 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 02:07:43.973618 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 02:07:43.973687 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 02:07:43.973758 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 02:07:43.973819 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 02:07:43.973880 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 02:07:43.973949 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 02:07:43.974011 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 02:07:43.974072 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 02:07:43.974085 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 02:07:43.974093 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 02:07:43.974101 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 02:07:43.974109 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 02:07:43.974117 kernel: iommu: Default domain type: Translated
Dec 13 02:07:43.974124 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 02:07:43.974132 kernel: efivars: Registered efivars operations
Dec 13 02:07:43.974140 kernel: vgaarb: loaded
Dec 13 02:07:43.974148 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 02:07:43.974157 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:07:43.974165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:07:43.974173 kernel: pnp: PnP ACPI init
Dec 13 02:07:43.974253 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 02:07:43.974265 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 02:07:43.974273 kernel: NET: Registered PF_INET protocol family
Dec 13 02:07:43.974281 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:07:43.974289 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 02:07:43.974299 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:07:43.974307 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:07:43.974315 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 02:07:43.974323 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 02:07:43.974331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 02:07:43.974338 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 02:07:43.974346 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:07:43.974426 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 02:07:43.974438 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:07:43.974448 kernel: kvm [1]: HYP mode not available
Dec 13 02:07:43.974457 kernel: Initialise system trusted keyrings
Dec 13 02:07:43.974464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 02:07:43.974472 kernel: Key type asymmetric registered
Dec 13 02:07:43.974480 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:07:43.974488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 02:07:43.974495 kernel: io scheduler mq-deadline registered
Dec 13 02:07:43.974503 kernel: io scheduler kyber registered
Dec 13 02:07:43.974511 kernel: io scheduler bfq registered
Dec 13 02:07:43.974521 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 02:07:43.977554 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 02:07:43.977648 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 02:07:43.977718 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 02:07:43.977790 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 02:07:43.977859 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 02:07:43.977931 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 02:07:43.978002 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 02:07:43.978068 kernel: pcieport 0000:00:02.2: 
AER: enabled with IRQ 52 Dec 13 02:07:43.978136 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.978217 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 02:07:43.978285 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 02:07:43.978356 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.978425 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 02:07:43.978490 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 02:07:43.978581 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.978655 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 02:07:43.978722 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 02:07:43.978792 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.978860 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 02:07:43.978927 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 02:07:43.978993 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.979064 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 02:07:43.979133 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 02:07:43.979213 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.979225 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 02:07:43.979307 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
Dec 13 02:07:43.979404 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 13 02:07:43.979489 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:43.979500 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 02:07:43.979509 kernel: ACPI: button: Power Button [PWRB] Dec 13 02:07:43.979520 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 02:07:43.979652 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Dec 13 02:07:43.979733 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 13 02:07:43.979808 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 13 02:07:43.979820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:07:43.979828 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 02:07:43.979896 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 13 02:07:43.979907 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 13 02:07:43.979915 kernel: thunder_xcv, ver 1.0 Dec 13 02:07:43.979927 kernel: thunder_bgx, ver 1.0 Dec 13 02:07:43.979935 kernel: nicpf, ver 1.0 Dec 13 02:07:43.979943 kernel: nicvf, ver 1.0 Dec 13 02:07:43.980028 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 02:07:43.980094 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T02:07:43 UTC (1734055663) Dec 13 02:07:43.980104 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 02:07:43.980113 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 02:07:43.980120 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 02:07:43.980130 kernel: watchdog: Hard watchdog permanently disabled Dec 13 02:07:43.980138 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:07:43.980146 kernel: Segment Routing with IPv6 Dec 13 02:07:43.980153 kernel: In-situ 
OAM (IOAM) with IPv6 Dec 13 02:07:43.980161 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:07:43.980169 kernel: Key type dns_resolver registered Dec 13 02:07:43.980176 kernel: registered taskstats version 1 Dec 13 02:07:43.980184 kernel: Loading compiled-in X.509 certificates Dec 13 02:07:43.980192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 02:07:43.980201 kernel: Key type .fscrypt registered Dec 13 02:07:43.980209 kernel: Key type fscrypt-provisioning registered Dec 13 02:07:43.980217 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:07:43.980225 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:07:43.980233 kernel: ima: No architecture policies found Dec 13 02:07:43.980240 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 02:07:43.980248 kernel: clk: Disabling unused clocks Dec 13 02:07:43.980256 kernel: Freeing unused kernel memory: 39360K Dec 13 02:07:43.980263 kernel: Run /init as init process Dec 13 02:07:43.980272 kernel: with arguments: Dec 13 02:07:43.980280 kernel: /init Dec 13 02:07:43.980287 kernel: with environment: Dec 13 02:07:43.980295 kernel: HOME=/ Dec 13 02:07:43.980302 kernel: TERM=linux Dec 13 02:07:43.980310 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:07:43.980320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:07:43.980330 systemd[1]: Detected virtualization kvm. Dec 13 02:07:43.980340 systemd[1]: Detected architecture arm64. Dec 13 02:07:43.980349 systemd[1]: Running in initrd. Dec 13 02:07:43.980357 systemd[1]: No hostname configured, using default hostname. 
Dec 13 02:07:43.980365 systemd[1]: Hostname set to . Dec 13 02:07:43.980374 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:07:43.980383 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:07:43.980392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:07:43.980403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:07:43.980415 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 02:07:43.980425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:07:43.980436 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 02:07:43.980447 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 02:07:43.980457 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 02:07:43.980466 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 02:07:43.980475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:07:43.980484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:07:43.980492 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:07:43.980500 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:07:43.980508 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:07:43.980516 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:07:43.980524 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:07:43.980532 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 02:07:43.980565 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 02:07:43.980576 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 02:07:43.980584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:07:43.980592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:07:43.980601 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:07:43.980609 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:07:43.980617 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 02:07:43.980625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:07:43.980634 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 02:07:43.980643 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:07:43.980652 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:07:43.980660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:07:43.980668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:43.980676 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 02:07:43.980684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:07:43.980715 systemd-journald[236]: Collecting audit messages is disabled. Dec 13 02:07:43.980737 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:07:43.980746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 02:07:43.980757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:43.980765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 02:07:43.980774 systemd-journald[236]: Journal started Dec 13 02:07:43.980795 systemd-journald[236]: Runtime Journal (/run/log/journal/621c930adc18495792a1467d9a5ea3bc) is 8.0M, max 76.5M, 68.5M free. Dec 13 02:07:43.984692 kernel: Bridge firewalling registered Dec 13 02:07:43.962733 systemd-modules-load[237]: Inserted module 'overlay' Dec 13 02:07:43.985818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:43.986720 systemd-modules-load[237]: Inserted module 'br_netfilter' Dec 13 02:07:43.987888 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:07:43.988324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:07:43.989070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 02:07:43.992985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:07:43.997025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:07:44.000815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:07:44.011805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:07:44.019613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:44.021384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:07:44.022149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:07:44.028685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 02:07:44.032726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 02:07:44.044209 dracut-cmdline[272]: dracut-dracut-053 Dec 13 02:07:44.047703 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 02:07:44.072728 systemd-resolved[273]: Positive Trust Anchors: Dec 13 02:07:44.072748 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:44.072787 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:07:44.078371 systemd-resolved[273]: Defaulting to hostname 'linux'. Dec 13 02:07:44.079592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:07:44.080328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:07:44.148575 kernel: SCSI subsystem initialized Dec 13 02:07:44.153582 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:07:44.160591 kernel: iscsi: registered transport (tcp) Dec 13 02:07:44.173616 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:07:44.173708 kernel: QLogic iSCSI HBA Driver Dec 13 02:07:44.228819 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Dec 13 02:07:44.234719 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 02:07:44.257370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:07:44.258939 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:07:44.258977 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 02:07:44.314591 kernel: raid6: neonx8 gen() 15670 MB/s Dec 13 02:07:44.331713 kernel: raid6: neonx4 gen() 15382 MB/s Dec 13 02:07:44.348593 kernel: raid6: neonx2 gen() 13199 MB/s Dec 13 02:07:44.365604 kernel: raid6: neonx1 gen() 10453 MB/s Dec 13 02:07:44.382585 kernel: raid6: int64x8 gen() 6912 MB/s Dec 13 02:07:44.399613 kernel: raid6: int64x4 gen() 7291 MB/s Dec 13 02:07:44.416591 kernel: raid6: int64x2 gen() 6105 MB/s Dec 13 02:07:44.433640 kernel: raid6: int64x1 gen() 5039 MB/s Dec 13 02:07:44.433729 kernel: raid6: using algorithm neonx8 gen() 15670 MB/s Dec 13 02:07:44.450584 kernel: raid6: .... xor() 11888 MB/s, rmw enabled Dec 13 02:07:44.450648 kernel: raid6: using neon recovery algorithm Dec 13 02:07:44.454577 kernel: xor: measuring software checksum speed Dec 13 02:07:44.455682 kernel: 8regs : 18436 MB/sec Dec 13 02:07:44.455727 kernel: 32regs : 19669 MB/sec Dec 13 02:07:44.455749 kernel: arm64_neon : 25137 MB/sec Dec 13 02:07:44.455769 kernel: xor: using function: arm64_neon (25137 MB/sec) Dec 13 02:07:44.506634 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 02:07:44.522615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:07:44.528698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:07:44.553162 systemd-udevd[455]: Using default interface naming scheme 'v255'. Dec 13 02:07:44.556916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 02:07:44.566716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 02:07:44.584019 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Dec 13 02:07:44.616938 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:07:44.623708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:07:44.680156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:07:44.689782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 02:07:44.710580 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 02:07:44.711938 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:07:44.713135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:07:44.714213 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:07:44.719757 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 02:07:44.732037 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:07:44.821249 kernel: scsi host0: Virtio SCSI HBA Dec 13 02:07:44.824588 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:44.824629 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:44.831685 kernel: ACPI: bus type USB registered Dec 13 02:07:44.831750 kernel: usbcore: registered new interface driver usbfs Dec 13 02:07:44.831764 kernel: usbcore: registered new interface driver hub Dec 13 02:07:44.832764 kernel: usbcore: registered new device driver usb Dec 13 02:07:44.845729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:07:44.845857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 02:07:44.846626 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:44.848834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:07:44.848988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:44.849716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:44.861378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:44.865121 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 13 02:07:44.875218 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 13 02:07:44.875405 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:07:44.875419 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 13 02:07:44.879562 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 13 02:07:44.891270 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 02:07:44.891428 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 13 02:07:44.891521 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 13 02:07:44.891623 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:07:44.891709 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 02:07:44.894846 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 02:07:44.894954 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 02:07:44.895044 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:07:44.895056 kernel: GPT:17805311 != 80003071 Dec 13 02:07:44.895065 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:07:44.895075 kernel: GPT:17805311 != 80003071 Dec 13 02:07:44.895084 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 02:07:44.895094 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:44.895106 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 02:07:44.895190 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 02:07:44.895279 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 02:07:44.896615 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 13 02:07:44.896744 kernel: hub 1-0:1.0: USB hub found Dec 13 02:07:44.896861 kernel: hub 1-0:1.0: 4 ports detected Dec 13 02:07:44.896956 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 02:07:44.897090 kernel: hub 2-0:1.0: USB hub found Dec 13 02:07:44.897186 kernel: hub 2-0:1.0: 4 ports detected Dec 13 02:07:44.879854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:44.889866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:44.916133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:44.932591 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (500) Dec 13 02:07:44.936448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 02:07:44.944578 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (504) Dec 13 02:07:44.952979 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 02:07:44.962307 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 02:07:44.968022 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 02:07:44.968642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Dec 13 02:07:44.977730 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 02:07:44.983077 disk-uuid[573]: Primary Header is updated. Dec 13 02:07:44.983077 disk-uuid[573]: Secondary Entries is updated. Dec 13 02:07:44.983077 disk-uuid[573]: Secondary Header is updated. Dec 13 02:07:44.997015 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:45.002566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:45.130578 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 02:07:45.373612 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 02:07:45.514575 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 02:07:45.514639 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 02:07:45.516618 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 02:07:45.570334 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 02:07:45.571204 kernel: usbcore: registered new interface driver usbhid Dec 13 02:07:45.571251 kernel: usbhid: USB HID core driver Dec 13 02:07:46.003824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:46.006180 disk-uuid[574]: The operation has completed successfully. Dec 13 02:07:46.069177 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:07:46.069277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 02:07:46.080726 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 13 02:07:46.086511 sh[588]: Success Dec 13 02:07:46.111656 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 02:07:46.153180 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 02:07:46.163907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 02:07:46.166571 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 02:07:46.187024 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 02:07:46.187091 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:46.187115 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 02:07:46.187650 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 02:07:46.188561 kernel: BTRFS info (device dm-0): using free space tree Dec 13 02:07:46.194585 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 02:07:46.196278 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 02:07:46.198205 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 02:07:46.204712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 02:07:46.207713 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 02:07:46.219588 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:46.219638 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:46.219650 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:07:46.224673 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:07:46.224726 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:07:46.234820 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:46.234940 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:07:46.239367 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 02:07:46.244720 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 02:07:46.332966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:07:46.344240 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:07:46.364486 systemd-networkd[774]: lo: Link UP Dec 13 02:07:46.364500 systemd-networkd[774]: lo: Gained carrier Dec 13 02:07:46.367062 systemd-networkd[774]: Enumeration completed Dec 13 02:07:46.367161 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:07:46.367839 systemd[1]: Reached target network.target - Network. Dec 13 02:07:46.369215 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:46.369218 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:46.370044 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 02:07:46.370047 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:46.371131 systemd-networkd[774]: eth0: Link UP Dec 13 02:07:46.371134 systemd-networkd[774]: eth0: Gained carrier Dec 13 02:07:46.371141 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:46.375880 systemd-networkd[774]: eth1: Link UP Dec 13 02:07:46.375883 systemd-networkd[774]: eth1: Gained carrier Dec 13 02:07:46.375890 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:46.377672 ignition[680]: Ignition 2.19.0 Dec 13 02:07:46.377678 ignition[680]: Stage: fetch-offline Dec 13 02:07:46.377709 ignition[680]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:46.379035 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:07:46.377717 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:46.377864 ignition[680]: parsed url from cmdline: "" Dec 13 02:07:46.377867 ignition[680]: no config URL provided Dec 13 02:07:46.377871 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:07:46.377877 ignition[680]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:07:46.377882 ignition[680]: failed to fetch config: resource requires networking Dec 13 02:07:46.378031 ignition[680]: Ignition finished successfully Dec 13 02:07:46.389772 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 02:07:46.404840 ignition[778]: Ignition 2.19.0
Dec 13 02:07:46.405715 ignition[778]: Stage: fetch
Dec 13 02:07:46.406215 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:46.406260 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:46.406635 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 02:07:46.406487 ignition[778]: parsed url from cmdline: ""
Dec 13 02:07:46.406496 ignition[778]: no config URL provided
Dec 13 02:07:46.406508 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:07:46.406526 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:07:46.408131 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 02:07:46.409777 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 02:07:46.517659 systemd-networkd[774]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 02:07:46.609968 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 02:07:46.617950 ignition[778]: GET result: OK
Dec 13 02:07:46.618125 ignition[778]: parsing config with SHA512: 4484161c19de43a82b07c686c81261e87a9d8ca76ac226b7b1531f443580ac6c7b68023488065f42e2865b4949d63afe66f4a71d1fefa461a3c119e15ea9dd0d
Dec 13 02:07:46.626763 unknown[778]: fetched base config from "system"
Dec 13 02:07:46.626786 unknown[778]: fetched base config from "system"
Dec 13 02:07:46.627865 ignition[778]: fetch: fetch complete
Dec 13 02:07:46.626799 unknown[778]: fetched user config from "hetzner"
Dec 13 02:07:46.627872 ignition[778]: fetch: fetch passed
Dec 13 02:07:46.627926 ignition[778]: Ignition finished successfully
Dec 13 02:07:46.631284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:07:46.638734 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:07:46.654392 ignition[785]: Ignition 2.19.0
Dec 13 02:07:46.654409 ignition[785]: Stage: kargs
Dec 13 02:07:46.654635 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:46.654648 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:46.661092 ignition[785]: kargs: kargs passed
Dec 13 02:07:46.661246 ignition[785]: Ignition finished successfully
Dec 13 02:07:46.664057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:07:46.670722 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:07:46.685417 ignition[791]: Ignition 2.19.0
Dec 13 02:07:46.685438 ignition[791]: Stage: disks
Dec 13 02:07:46.685812 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:46.685834 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:46.690952 ignition[791]: disks: disks passed
Dec 13 02:07:46.691042 ignition[791]: Ignition finished successfully
Dec 13 02:07:46.692801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:07:46.694988 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:07:46.696777 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:07:46.697863 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:07:46.699256 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:07:46.700195 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:07:46.707713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:07:46.725300 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:07:46.729571 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:07:46.735735 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:07:46.785581 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 02:07:46.786144 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:07:46.787472 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:07:46.795706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:07:46.798745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:07:46.802892 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 02:07:46.806798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:07:46.807816 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:07:46.815129 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807)
Dec 13 02:07:46.814872 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:07:46.819329 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 02:07:46.819419 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 02:07:46.819433 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:07:46.819833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:07:46.828194 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:07:46.828237 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 02:07:46.831644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:07:46.872134 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:07:46.877565 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:07:46.882820 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:07:46.883907 coreos-metadata[809]: Dec 13 02:07:46.883 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 02:07:46.885452 coreos-metadata[809]: Dec 13 02:07:46.885 INFO Fetch successful
Dec 13 02:07:46.885452 coreos-metadata[809]: Dec 13 02:07:46.885 INFO wrote hostname ci-4081-2-1-f-bc189a5809 to /sysroot/etc/hostname
Dec 13 02:07:46.888040 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 02:07:46.890296 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:07:46.985388 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:07:46.990685 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:07:46.995923 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:07:47.004563 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 02:07:47.025660 ignition[924]: INFO : Ignition 2.19.0
Dec 13 02:07:47.025660 ignition[924]: INFO : Stage: mount
Dec 13 02:07:47.028482 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:47.028482 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:47.028482 ignition[924]: INFO : mount: mount passed
Dec 13 02:07:47.028482 ignition[924]: INFO : Ignition finished successfully
Dec 13 02:07:47.027638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 02:07:47.031005 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:07:47.035681 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 02:07:47.188250 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 02:07:47.194892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:07:47.214613 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (936)
Dec 13 02:07:47.216850 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 02:07:47.216886 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 02:07:47.216897 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:07:47.219547 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:07:47.219637 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 02:07:47.222922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:07:47.245037 ignition[953]: INFO : Ignition 2.19.0
Dec 13 02:07:47.245037 ignition[953]: INFO : Stage: files
Dec 13 02:07:47.246141 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:47.246141 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:47.247347 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:07:47.249093 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:07:47.249093 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:07:47.252871 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:07:47.254099 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:07:47.254099 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:07:47.253302 unknown[953]: wrote ssh authorized keys file for user: core
Dec 13 02:07:47.256401 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 02:07:47.256401 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 02:07:47.341267 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:07:47.516677 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 02:07:47.516677 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:07:47.519629 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 02:07:47.942888 systemd-networkd[774]: eth1: Gained IPv6LL
Dec 13 02:07:48.070909 systemd-networkd[774]: eth0: Gained IPv6LL
Dec 13 02:07:48.083502 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 02:07:48.172603 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:07:48.172603 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:07:48.172603 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:07:48.172603 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 02:07:48.176434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 02:07:48.693582 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 02:07:48.916508 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 02:07:48.917994 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 02:07:48.917994 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:07:48.920644 ignition[953]: INFO : files: files passed
Dec 13 02:07:48.920644 ignition[953]: INFO : Ignition finished successfully
Dec 13 02:07:48.921826 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 02:07:48.932770 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 02:07:48.936034 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 02:07:48.939032 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:07:48.939751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 02:07:48.947954 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:07:48.947954 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:07:48.950692 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:07:48.953578 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:07:48.955281 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 02:07:48.958683 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 02:07:48.990599 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:07:48.990714 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 02:07:48.992493 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 02:07:48.993473 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 02:07:48.994556 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 02:07:48.996013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 02:07:49.016584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:07:49.023738 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 02:07:49.033637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:07:49.035228 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:07:49.036059 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 02:07:49.037246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:07:49.037380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:07:49.039794 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 02:07:49.040381 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 02:07:49.041509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 02:07:49.042623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:07:49.043954 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 02:07:49.045240 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 02:07:49.046330 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:07:49.047620 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 02:07:49.048787 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 02:07:49.049688 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 02:07:49.050461 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:07:49.050608 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:07:49.051734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:07:49.052305 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:07:49.053313 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 02:07:49.053387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:07:49.054342 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:07:49.054457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:07:49.055831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:07:49.055947 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:07:49.057089 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:07:49.057178 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 02:07:49.058047 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 02:07:49.058137 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 02:07:49.068768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 02:07:49.069237 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:07:49.069363 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:07:49.074837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 02:07:49.076141 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:07:49.077295 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:07:49.079240 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:07:49.079437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:07:49.084787 ignition[1005]: INFO : Ignition 2.19.0
Dec 13 02:07:49.084787 ignition[1005]: INFO : Stage: umount
Dec 13 02:07:49.085998 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:49.085998 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:07:49.087767 ignition[1005]: INFO : umount: umount passed
Dec 13 02:07:49.087767 ignition[1005]: INFO : Ignition finished successfully
Dec 13 02:07:49.090869 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:07:49.091522 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 02:07:49.093975 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:07:49.094733 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 02:07:49.095978 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:07:49.096027 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 02:07:49.096770 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:07:49.096811 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 02:07:49.097762 systemd[1]: Stopped target network.target - Network.
Dec 13 02:07:49.104078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:07:49.104144 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:07:49.105019 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 02:07:49.105511 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:07:49.110313 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:07:49.111676 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 02:07:49.112187 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 02:07:49.115811 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:07:49.115866 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:07:49.117130 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:07:49.117175 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:07:49.118654 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:07:49.118714 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 02:07:49.122926 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 02:07:49.122987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 02:07:49.124087 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 02:07:49.126935 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 02:07:49.127081 systemd-networkd[774]: eth0: DHCPv6 lease lost
Dec 13 02:07:49.129627 systemd-networkd[774]: eth1: DHCPv6 lease lost
Dec 13 02:07:49.130225 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:07:49.130885 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:07:49.130978 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 02:07:49.135760 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:07:49.135934 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 02:07:49.140227 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:07:49.140320 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 02:07:49.149832 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:07:49.149906 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:07:49.159673 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 02:07:49.160119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:07:49.160187 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:07:49.160871 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:07:49.160912 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:07:49.161437 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:07:49.161471 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:07:49.163717 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 02:07:49.163779 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:07:49.165179 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:07:49.167996 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:07:49.168089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 02:07:49.185364 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:07:49.186122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:07:49.188535 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:07:49.190162 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 02:07:49.194019 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:07:49.194147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:07:49.196490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:07:49.196588 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:07:49.198120 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:07:49.198209 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:07:49.200513 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:07:49.200586 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:07:49.202143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:07:49.202194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:07:49.203963 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:07:49.204009 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 02:07:49.210812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 02:07:49.211441 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:07:49.211503 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:07:49.214616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:07:49.214687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:07:49.220090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:07:49.220891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 02:07:49.221839 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 02:07:49.227868 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 02:07:49.241402 systemd[1]: Switching root.
Dec 13 02:07:49.282095 systemd-journald[236]: Journal stopped
Dec 13 02:07:50.136380 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:07:50.136448 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:07:50.136472 kernel: SELinux: policy capability open_perms=1
Dec 13 02:07:50.136484 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:07:50.136494 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:07:50.136504 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:07:50.136521 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:07:50.136531 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:07:50.137636 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:07:50.137664 kernel: audit: type=1403 audit(1734055669.418:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:07:50.137676 systemd[1]: Successfully loaded SELinux policy in 34.379ms.
Dec 13 02:07:50.137696 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.391ms.
Dec 13 02:07:50.137707 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:07:50.137718 systemd[1]: Detected virtualization kvm.
Dec 13 02:07:50.137729 systemd[1]: Detected architecture arm64.
Dec 13 02:07:50.137746 systemd[1]: Detected first boot.
Dec 13 02:07:50.137761 systemd[1]: Hostname set to .
Dec 13 02:07:50.137772 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:07:50.137783 zram_generator::config[1048]: No configuration found.
Dec 13 02:07:50.137794 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:07:50.137805 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:07:50.137815 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 02:07:50.137826 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:07:50.137839 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 02:07:50.137849 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 02:07:50.137860 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 02:07:50.137870 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 02:07:50.137880 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 02:07:50.137891 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 02:07:50.137902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 02:07:50.137912 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 02:07:50.137924 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:07:50.137934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:07:50.137945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 02:07:50.137956 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 02:07:50.137966 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 02:07:50.137977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:07:50.137987 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 02:07:50.137997 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:07:50.138007 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 02:07:50.138020 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 02:07:50.138030 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:07:50.138041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 02:07:50.138052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:07:50.138067 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:07:50.138077 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:07:50.138088 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:07:50.138101 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 02:07:50.138112 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 02:07:50.138122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:07:50.138132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:07:50.138142 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:07:50.138153 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 02:07:50.138163 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 02:07:50.138174 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 02:07:50.138184 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 02:07:50.138196 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 02:07:50.138206 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 02:07:50.138217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 02:07:50.138228 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:07:50.138238 systemd[1]: Reached target machines.target - Containers.
Dec 13 02:07:50.138253 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 02:07:50.138265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:07:50.138276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:07:50.138287 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 02:07:50.138297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:07:50.138308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:07:50.138318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:07:50.138329 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 02:07:50.138339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:07:50.138352 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:07:50.138363 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:07:50.138373 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 02:07:50.138383 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:07:50.138394 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:07:50.138403 kernel: loop: module loaded
Dec 13 02:07:50.138413 kernel: fuse: init (API version 7.39)
Dec 13 02:07:50.138423 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:07:50.138435 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:07:50.138446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 02:07:50.138457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 02:07:50.138467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:07:50.138478 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:07:50.138488 systemd[1]: Stopped verity-setup.service.
Dec 13 02:07:50.138498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 02:07:50.138509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 02:07:50.138519 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 02:07:50.138532 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 02:07:50.138554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 02:07:50.138565 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 02:07:50.138576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:07:50.139582 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:07:50.139624 kernel: ACPI: bus type drm_connector registered
Dec 13 02:07:50.139636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 02:07:50.139649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:07:50.139660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:07:50.139675 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:07:50.139687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:07:50.139734 systemd-journald[1115]: Collecting audit messages is disabled.
Dec 13 02:07:50.139759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:07:50.139771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:07:50.139787 systemd-journald[1115]: Journal started
Dec 13 02:07:50.139810 systemd-journald[1115]: Runtime Journal (/run/log/journal/621c930adc18495792a1467d9a5ea3bc) is 8.0M, max 76.5M, 68.5M free.
Dec 13 02:07:49.893373 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:07:49.913359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 02:07:49.913775 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:07:50.143585 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:07:50.142861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:07:50.143056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 02:07:50.144272 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:07:50.146750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:07:50.148036 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 02:07:50.149190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:07:50.150295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 02:07:50.151888 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 02:07:50.163132 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 02:07:50.174663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 02:07:50.180746 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 02:07:50.182661 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:07:50.182708 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:07:50.186642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 02:07:50.191054 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 02:07:50.192734 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 02:07:50.194350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:07:50.198807 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 02:07:50.202841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 02:07:50.203420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:07:50.204352 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 02:07:50.205723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:07:50.207717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:07:50.213776 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 02:07:50.216775 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 02:07:50.220948 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 02:07:50.221605 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 02:07:50.222367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 02:07:50.238653 kernel: loop0: detected capacity change from 0 to 8
Dec 13 02:07:50.239466 systemd-journald[1115]: Time spent on flushing to /var/log/journal/621c930adc18495792a1467d9a5ea3bc is 56.877ms for 1129 entries.
Dec 13 02:07:50.239466 systemd-journald[1115]: System Journal (/var/log/journal/621c930adc18495792a1467d9a5ea3bc) is 8.0M, max 584.8M, 576.8M free.
Dec 13 02:07:50.312999 systemd-journald[1115]: Received client request to flush runtime journal.
Dec 13 02:07:50.313043 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:07:50.313073 kernel: loop1: detected capacity change from 0 to 194096
Dec 13 02:07:50.258329 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:07:50.268771 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 02:07:50.274604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 02:07:50.276257 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 02:07:50.284303 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 02:07:50.294605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:07:50.321278 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 02:07:50.328992 kernel: loop2: detected capacity change from 0 to 114328
Dec 13 02:07:50.334666 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 02:07:50.343970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:07:50.347901 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:07:50.348623 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 02:07:50.350145 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:07:50.370653 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Dec 13 02:07:50.370670 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Dec 13 02:07:50.374980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:07:50.383639 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 02:07:50.426568 kernel: loop4: detected capacity change from 0 to 8
Dec 13 02:07:50.430707 kernel: loop5: detected capacity change from 0 to 194096
Dec 13 02:07:50.456170 kernel: loop6: detected capacity change from 0 to 114328
Dec 13 02:07:50.474575 kernel: loop7: detected capacity change from 0 to 114432
Dec 13 02:07:50.490185 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 02:07:50.490625 (sd-merge)[1187]: Merged extensions into '/usr'.
Dec 13 02:07:50.500014 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 02:07:50.500035 systemd[1]: Reloading...
Dec 13 02:07:50.577138 zram_generator::config[1213]: No configuration found.
Dec 13 02:07:50.678198 ldconfig[1157]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:07:50.737515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:07:50.783024 systemd[1]: Reloading finished in 282 ms.
Dec 13 02:07:50.809716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 02:07:50.812702 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 02:07:50.824108 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:07:50.827258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:07:50.841641 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Dec 13 02:07:50.841922 systemd[1]: Reloading...
Dec 13 02:07:50.863726 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:07:50.864080 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 02:07:50.864763 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:07:50.864980 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Dec 13 02:07:50.865024 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Dec 13 02:07:50.872069 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:07:50.872085 systemd-tmpfiles[1251]: Skipping /boot
Dec 13 02:07:50.893619 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:07:50.893637 systemd-tmpfiles[1251]: Skipping /boot
Dec 13 02:07:50.918252 zram_generator::config[1277]: No configuration found.
Dec 13 02:07:51.033463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:07:51.079860 systemd[1]: Reloading finished in 237 ms.
Dec 13 02:07:51.097572 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 02:07:51.098805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:07:51.114760 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:07:51.119661 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 02:07:51.130761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 02:07:51.137816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:07:51.142692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:07:51.147063 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 02:07:51.154788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:07:51.163885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:07:51.173888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:07:51.175885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:07:51.176479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:07:51.177250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:07:51.178030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:07:51.189832 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 02:07:51.194130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:07:51.198036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:07:51.199097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:07:51.202223 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 02:07:51.207867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:07:51.211069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:07:51.211958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:07:51.213922 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 02:07:51.214832 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:07:51.216716 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 02:07:51.233836 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 02:07:51.234747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:07:51.234910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:07:51.236162 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:07:51.239848 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:07:51.241578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:07:51.261753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:07:51.261903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:07:51.263121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:07:51.268896 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:07:51.269072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:07:51.275395 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 02:07:51.279841 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Dec 13 02:07:51.293675 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 02:07:51.296748 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:07:51.314295 augenrules[1355]: No rules
Dec 13 02:07:51.315344 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 02:07:51.321086 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 02:07:51.321727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:07:51.327833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:07:51.436567 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 02:07:51.437289 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 02:07:51.457987 systemd-networkd[1368]: lo: Link UP
Dec 13 02:07:51.458001 systemd-networkd[1368]: lo: Gained carrier
Dec 13 02:07:51.464257 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 02:07:51.474950 systemd-resolved[1320]: Positive Trust Anchors:
Dec 13 02:07:51.477623 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:07:51.477666 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:07:51.483494 systemd-networkd[1368]: Enumeration completed
Dec 13 02:07:51.483608 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:07:51.487649 systemd-resolved[1320]: Using system hostname 'ci-4081-2-1-f-bc189a5809'.
Dec 13 02:07:51.488720 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 02:07:51.492011 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:07:51.494803 systemd[1]: Reached target network.target - Network.
Dec 13 02:07:51.495244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:07:51.496507 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:07:51.496515 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:07:51.500384 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:07:51.500420 systemd-networkd[1368]: eth0: Link UP
Dec 13 02:07:51.500423 systemd-networkd[1368]: eth0: Gained carrier
Dec 13 02:07:51.500432 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:07:51.502565 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1384)
Dec 13 02:07:51.506557 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1384)
Dec 13 02:07:51.537558 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:07:51.547565 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:07:51.547575 systemd-networkd[1368]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:07:51.549165 systemd-networkd[1368]: eth1: Link UP
Dec 13 02:07:51.549178 systemd-networkd[1368]: eth1: Gained carrier
Dec 13 02:07:51.549195 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:07:51.576624 systemd-networkd[1368]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 02:07:51.578290 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Dec 13 02:07:51.587379 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 02:07:51.587504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:07:51.599764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:07:51.602366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:07:51.612785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:07:51.613338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:07:51.613378 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:07:51.613782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:07:51.613944 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:07:51.616599 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:07:51.616751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:07:51.618015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:07:51.620582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:07:51.623024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:07:51.623090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:07:51.642916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:07:51.647622 systemd-networkd[1368]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 02:07:51.648016 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Dec 13 02:07:51.650177 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Dec 13 02:07:51.674668 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Dec 13 02:07:51.679809 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1381)
Dec 13 02:07:51.679902 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 02:07:51.679925 kernel: [drm] features: -context_init
Dec 13 02:07:51.683674 kernel: [drm] number of scanouts: 1
Dec 13 02:07:51.683753 kernel: [drm] number of cap sets: 0
Dec 13 02:07:51.687562 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 02:07:51.700804 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 02:07:51.716568 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 02:07:51.723847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 02:07:51.733812 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 02:07:51.735678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:07:51.738213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:07:51.749749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:07:51.751964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 02:07:51.780682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:07:51.815902 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 02:07:51.820811 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 02:07:51.842571 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:07:51.872432 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 02:07:51.874627 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:07:51.876259 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:07:51.878017 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 02:07:51.879750 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 02:07:51.880697 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 02:07:51.881454 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 02:07:51.882295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 02:07:51.883127 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:07:51.883164 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:07:51.883741 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:07:51.886046 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 02:07:51.887930 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 02:07:51.896846 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 02:07:51.899419 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 02:07:51.901161 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 02:07:51.902187 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:07:51.903606 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:07:51.904148 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 02:07:51.904180 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 02:07:51.905687 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 02:07:51.910200 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:07:51.911743 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 02:07:51.914656 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 02:07:51.920691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 02:07:51.924710 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 02:07:51.925744 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 02:07:51.928704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 02:07:51.931250 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 02:07:51.945712 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 02:07:51.950704 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 02:07:51.952690 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 02:07:51.955454 jq[1443]: false
Dec 13 02:07:51.956780 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 02:07:51.957981 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:07:51.958365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:07:51.962963 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 02:07:51.966647 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 02:07:51.969693 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:07:51.972165 dbus-daemon[1442]: [system] SELinux support is enabled Dec 13 02:07:51.974721 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:07:51.978891 coreos-metadata[1441]: Dec 13 02:07:51.978 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 02:07:51.979588 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:07:51.979827 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:07:51.980237 coreos-metadata[1441]: Dec 13 02:07:51.980 INFO Fetch successful Dec 13 02:07:51.980237 coreos-metadata[1441]: Dec 13 02:07:51.980 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 02:07:51.982588 coreos-metadata[1441]: Dec 13 02:07:51.981 INFO Fetch successful Dec 13 02:07:51.994506 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:07:51.994590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:07:51.995965 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:07:51.996859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:07:52.040696 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:07:52.040884 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 02:07:52.046027 jq[1454]: true Dec 13 02:07:52.053936 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 02:07:52.054127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:07:52.061946 tar[1456]: linux-arm64/helm Dec 13 02:07:52.073672 extend-filesystems[1444]: Found loop4 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found loop5 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found loop6 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found loop7 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda1 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda2 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda3 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found usr Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda4 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda6 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda7 Dec 13 02:07:52.073672 extend-filesystems[1444]: Found sda9 Dec 13 02:07:52.073672 extend-filesystems[1444]: Checking size of /dev/sda9 Dec 13 02:07:52.074670 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:07:52.091897 jq[1480]: true Dec 13 02:07:52.087758 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:07:52.088708 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:07:52.099137 update_engine[1453]: I20241213 02:07:52.096975 1453 main.cc:92] Flatcar Update Engine starting Dec 13 02:07:52.107783 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:07:52.110599 update_engine[1453]: I20241213 02:07:52.109858 1453 update_check_scheduler.cc:74] Next update check in 2m15s Dec 13 02:07:52.114024 systemd-logind[1452]: New seat seat0. 
Dec 13 02:07:52.115870 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 02:07:52.115889 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 02:07:52.121723 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:07:52.122343 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:07:52.134566 extend-filesystems[1444]: Resized partition /dev/sda9 Dec 13 02:07:52.140589 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:07:52.152698 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 02:07:52.263886 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:52.261724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:07:52.267416 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1372) Dec 13 02:07:52.275607 systemd[1]: Starting sshkeys.service... Dec 13 02:07:52.315566 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 02:07:52.316302 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:07:52.324695 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:07:52.335563 extend-filesystems[1497]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:07:52.335563 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 02:07:52.335563 extend-filesystems[1497]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 02:07:52.338410 extend-filesystems[1444]: Resized filesystem in /dev/sda9 Dec 13 02:07:52.338410 extend-filesystems[1444]: Found sr0 Dec 13 02:07:52.337038 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 02:07:52.338849 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:07:52.376887 coreos-metadata[1523]: Dec 13 02:07:52.375 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 02:07:52.376887 coreos-metadata[1523]: Dec 13 02:07:52.376 INFO Fetch successful Dec 13 02:07:52.378912 unknown[1523]: wrote ssh authorized keys file for user: core Dec 13 02:07:52.420169 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:52.420772 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:07:52.424434 systemd[1]: Finished sshkeys.service. Dec 13 02:07:52.469639 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:07:52.516960 containerd[1476]: time="2024-12-13T02:07:52.516868200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:07:52.566129 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:07:52.582931 containerd[1476]: time="2024-12-13T02:07:52.581641920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583072 containerd[1476]: time="2024-12-13T02:07:52.583032720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583072 containerd[1476]: time="2024-12-13T02:07:52.583068520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:07:52.583135 containerd[1476]: time="2024-12-13T02:07:52.583085360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 02:07:52.583378 containerd[1476]: time="2024-12-13T02:07:52.583259640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:07:52.583378 containerd[1476]: time="2024-12-13T02:07:52.583281920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583453 containerd[1476]: time="2024-12-13T02:07:52.583385200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583453 containerd[1476]: time="2024-12-13T02:07:52.583399040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583613 containerd[1476]: time="2024-12-13T02:07:52.583589800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583613 containerd[1476]: time="2024-12-13T02:07:52.583609440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583684 containerd[1476]: time="2024-12-13T02:07:52.583624200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583684 containerd[1476]: time="2024-12-13T02:07:52.583634040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583720 containerd[1476]: time="2024-12-13T02:07:52.583710000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:07:52.583918 containerd[1476]: time="2024-12-13T02:07:52.583893800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:52.584103 containerd[1476]: time="2024-12-13T02:07:52.584076160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:52.584103 containerd[1476]: time="2024-12-13T02:07:52.584095160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:07:52.584184 containerd[1476]: time="2024-12-13T02:07:52.584165080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:07:52.584227 containerd[1476]: time="2024-12-13T02:07:52.584212960Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:07:52.588568 containerd[1476]: time="2024-12-13T02:07:52.588519400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:07:52.588690 containerd[1476]: time="2024-12-13T02:07:52.588665160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:07:52.588723 containerd[1476]: time="2024-12-13T02:07:52.588696560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:07:52.589815 containerd[1476]: time="2024-12-13T02:07:52.589584680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:07:52.589815 containerd[1476]: time="2024-12-13T02:07:52.589619880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 02:07:52.590207 containerd[1476]: time="2024-12-13T02:07:52.590024080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:07:52.590271 containerd[1476]: time="2024-12-13T02:07:52.590255200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590348320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590370360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590384520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590398480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590411000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590422960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590442760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590457440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590469880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590483120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590495320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590519040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.590635 containerd[1476]: time="2024-12-13T02:07:52.590533560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592583040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592611840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592625240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592637920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592651400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592664000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592676960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592706080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592719400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592734000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592747240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592762760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592790920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592802960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.592941 containerd[1476]: time="2024-12-13T02:07:52.592814320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592924400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592942680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592954600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592966040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592975240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592987240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.592996840Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:07:52.593420 containerd[1476]: time="2024-12-13T02:07:52.593006440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:07:52.593605 containerd[1476]: time="2024-12-13T02:07:52.593352840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:07:52.593605 containerd[1476]: time="2024-12-13T02:07:52.593409000Z" level=info msg="Connect containerd service" Dec 13 02:07:52.593605 containerd[1476]: time="2024-12-13T02:07:52.593433200Z" level=info msg="using legacy CRI server" Dec 13 02:07:52.593605 containerd[1476]: time="2024-12-13T02:07:52.593440280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:07:52.593605 containerd[1476]: time="2024-12-13T02:07:52.593522920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:07:52.597230 containerd[1476]: time="2024-12-13T02:07:52.596776680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:07:52.597230 containerd[1476]: time="2024-12-13T02:07:52.597093240Z" level=info msg="Start subscribing containerd event" Dec 13 02:07:52.597230 containerd[1476]: time="2024-12-13T02:07:52.597151080Z" level=info msg="Start recovering state" Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597296240Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597335400Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597298000Z" level=info msg="Start event monitor" Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597364680Z" level=info msg="Start snapshots syncer" Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597373160Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:07:52.598661 containerd[1476]: time="2024-12-13T02:07:52.597380560Z" level=info msg="Start streaming server" Dec 13 02:07:52.597607 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:07:52.600658 containerd[1476]: time="2024-12-13T02:07:52.600618720Z" level=info msg="containerd successfully booted in 0.086848s" Dec 13 02:07:52.613422 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:07:52.622819 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:07:52.630716 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:07:52.632602 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:07:52.640632 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:07:52.660784 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:07:52.669195 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:07:52.676928 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 02:07:52.677744 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:07:52.778238 tar[1456]: linux-arm64/LICENSE Dec 13 02:07:52.778443 tar[1456]: linux-arm64/README.md Dec 13 02:07:52.794613 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 02:07:52.870741 systemd-networkd[1368]: eth0: Gained IPv6LL Dec 13 02:07:52.871629 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Dec 13 02:07:52.875807 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:07:52.878865 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:07:52.886745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:07:52.889232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:07:52.917835 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:07:53.254779 systemd-networkd[1368]: eth1: Gained IPv6LL Dec 13 02:07:53.255517 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Dec 13 02:07:53.541798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:07:53.543745 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:07:53.545797 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:07:53.548646 systemd[1]: Startup finished in 774ms (kernel) + 5.682s (initrd) + 4.172s (userspace) = 10.630s. Dec 13 02:07:54.173744 kubelet[1574]: E1213 02:07:54.173688 1574 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:54.175781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:54.176057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:04.426701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 13 02:08:04.432848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:04.563214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:04.567506 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:04.618035 kubelet[1594]: E1213 02:08:04.617958 1594 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:04.621204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:04.621349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:14.872380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:08:14.878951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:15.000982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:15.006211 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:15.053931 kubelet[1610]: E1213 02:08:15.053885 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:15.056320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:15.056489 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:08:23.683866 systemd-timesyncd[1341]: Contacted time server 178.63.67.56:123 (2.flatcar.pool.ntp.org). Dec 13 02:08:23.683973 systemd-timesyncd[1341]: Initial clock synchronization to Fri 2024-12-13 02:08:24.048524 UTC. Dec 13 02:08:25.308039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:08:25.314015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:25.438751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:25.450174 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:25.494670 kubelet[1626]: E1213 02:08:25.494607 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:25.496940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:25.497101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:35.736648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:08:35.744983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:35.884757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:08:35.895914 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:35.946471 kubelet[1642]: E1213 02:08:35.946417 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:35.949264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:35.949437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:36.942695 update_engine[1453]: I20241213 02:08:36.942270 1453 update_attempter.cc:509] Updating boot flags... Dec 13 02:08:37.000069 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1659) Dec 13 02:08:37.049483 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1663) Dec 13 02:08:45.985862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:08:45.992878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:46.116728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:08:46.128209 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:46.174013 kubelet[1676]: E1213 02:08:46.173955 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:46.176676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:46.176848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:56.235976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 02:08:56.246893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:56.354805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:56.359335 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:56.403415 kubelet[1692]: E1213 02:08:56.403349 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:56.405999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:56.406150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:06.485847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 02:09:06.492913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 02:09:06.615872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:06.616018 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:06.666148 kubelet[1708]: E1213 02:09:06.666082 1708 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:06.669034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:06.669266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:16.735379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 02:09:16.744838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:09:16.870157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:16.886990 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:16.932686 kubelet[1724]: E1213 02:09:16.932609 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:16.936339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:16.936636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:26.986113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 02:09:26.992973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:09:27.109279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:27.124211 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:27.177519 kubelet[1738]: E1213 02:09:27.177452 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:27.180224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:27.180401 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:37.236092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 02:09:37.243979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:09:37.348799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:37.352445 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:37.398372 kubelet[1755]: E1213 02:09:37.398255 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:37.401176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:37.401347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:39.018649 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 02:09:39.027032 systemd[1]: Started sshd@0-78.47.218.196:22-147.75.109.163:40378.service - OpenSSH per-connection server daemon (147.75.109.163:40378).
Dec 13 02:09:40.031128 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 40378 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:40.032600 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:40.046120 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 02:09:40.052579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 02:09:40.058149 systemd-logind[1452]: New session 1 of user core.
Dec 13 02:09:40.066316 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 02:09:40.072912 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 02:09:40.077395 (systemd)[1768]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:09:40.180261 systemd[1768]: Queued start job for default target default.target.
Dec 13 02:09:40.189846 systemd[1768]: Created slice app.slice - User Application Slice.
Dec 13 02:09:40.189873 systemd[1768]: Reached target paths.target - Paths.
Dec 13 02:09:40.189889 systemd[1768]: Reached target timers.target - Timers.
Dec 13 02:09:40.191527 systemd[1768]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 02:09:40.212880 systemd[1768]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 02:09:40.214427 systemd[1768]: Reached target sockets.target - Sockets.
Dec 13 02:09:40.214474 systemd[1768]: Reached target basic.target - Basic System.
Dec 13 02:09:40.214591 systemd[1768]: Reached target default.target - Main User Target.
Dec 13 02:09:40.214661 systemd[1768]: Startup finished in 130ms.
Dec 13 02:09:40.214683 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 02:09:40.223898 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 02:09:40.918029 systemd[1]: Started sshd@1-78.47.218.196:22-147.75.109.163:40380.service - OpenSSH per-connection server daemon (147.75.109.163:40380).
Dec 13 02:09:41.897217 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 40380 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:41.899960 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:41.905815 systemd-logind[1452]: New session 2 of user core.
Dec 13 02:09:41.912741 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 02:09:42.579816 sshd[1779]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:42.586180 systemd[1]: sshd@1-78.47.218.196:22-147.75.109.163:40380.service: Deactivated successfully.
Dec 13 02:09:42.589481 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:09:42.590320 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:09:42.591185 systemd-logind[1452]: Removed session 2.
Dec 13 02:09:42.757895 systemd[1]: Started sshd@2-78.47.218.196:22-147.75.109.163:40394.service - OpenSSH per-connection server daemon (147.75.109.163:40394).
Dec 13 02:09:43.737386 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 40394 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:43.739755 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:43.746272 systemd-logind[1452]: New session 3 of user core.
Dec 13 02:09:43.757907 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 02:09:44.413379 sshd[1786]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:44.418669 systemd[1]: sshd@2-78.47.218.196:22-147.75.109.163:40394.service: Deactivated successfully.
Dec 13 02:09:44.421218 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 02:09:44.422410 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit.
Dec 13 02:09:44.424300 systemd-logind[1452]: Removed session 3.
Dec 13 02:09:44.594652 systemd[1]: Started sshd@3-78.47.218.196:22-147.75.109.163:40404.service - OpenSSH per-connection server daemon (147.75.109.163:40404).
Dec 13 02:09:45.574883 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 40404 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:45.577461 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:45.585961 systemd-logind[1452]: New session 4 of user core.
Dec 13 02:09:45.598792 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 02:09:46.260785 sshd[1793]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:46.265986 systemd[1]: sshd@3-78.47.218.196:22-147.75.109.163:40404.service: Deactivated successfully.
Dec 13 02:09:46.268985 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:09:46.272209 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:09:46.273856 systemd-logind[1452]: Removed session 4.
Dec 13 02:09:46.435061 systemd[1]: Started sshd@4-78.47.218.196:22-147.75.109.163:49074.service - OpenSSH per-connection server daemon (147.75.109.163:49074).
Dec 13 02:09:47.408120 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 49074 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:47.410535 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:47.411950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 13 02:09:47.420026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:09:47.424299 systemd-logind[1452]: New session 5 of user core.
Dec 13 02:09:47.428816 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 02:09:47.532115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:47.537786 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:47.587109 kubelet[1811]: E1213 02:09:47.586997 1811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:47.590166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:47.590395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:47.951330 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 02:09:47.951636 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:09:47.967070 sudo[1819]: pam_unix(sudo:session): session closed for user root
Dec 13 02:09:48.127100 sshd[1800]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:48.132342 systemd[1]: sshd@4-78.47.218.196:22-147.75.109.163:49074.service: Deactivated successfully.
Dec 13 02:09:48.135019 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:09:48.137365 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:09:48.139016 systemd-logind[1452]: Removed session 5.
Dec 13 02:09:48.301861 systemd[1]: Started sshd@5-78.47.218.196:22-147.75.109.163:49084.service - OpenSSH per-connection server daemon (147.75.109.163:49084).
Dec 13 02:09:49.304338 sshd[1824]: Accepted publickey for core from 147.75.109.163 port 49084 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:49.306754 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:49.313901 systemd-logind[1452]: New session 6 of user core.
Dec 13 02:09:49.318749 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 02:09:49.828851 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 02:09:49.829307 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:09:49.834072 sudo[1828]: pam_unix(sudo:session): session closed for user root
Dec 13 02:09:49.839406 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 02:09:49.839781 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:09:49.862316 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 02:09:49.863874 auditctl[1831]: No rules
Dec 13 02:09:49.864250 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 02:09:49.864446 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 02:09:49.867131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:09:49.907879 augenrules[1849]: No rules
Dec 13 02:09:49.911626 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 02:09:49.913878 sudo[1827]: pam_unix(sudo:session): session closed for user root
Dec 13 02:09:50.074833 sshd[1824]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:50.081336 systemd[1]: sshd@5-78.47.218.196:22-147.75.109.163:49084.service: Deactivated successfully.
Dec 13 02:09:50.084855 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:09:50.086301 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:09:50.087711 systemd-logind[1452]: Removed session 6.
Dec 13 02:09:50.255872 systemd[1]: Started sshd@6-78.47.218.196:22-147.75.109.163:49094.service - OpenSSH per-connection server daemon (147.75.109.163:49094).
Dec 13 02:09:51.240109 sshd[1857]: Accepted publickey for core from 147.75.109.163 port 49094 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:09:51.242569 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:09:51.248662 systemd-logind[1452]: New session 7 of user core.
Dec 13 02:09:51.263211 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 02:09:51.765535 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:09:51.765884 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:09:52.115951 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 02:09:52.116642 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 02:09:52.412328 dockerd[1876]: time="2024-12-13T02:09:52.411781158Z" level=info msg="Starting up"
Dec 13 02:09:52.528115 dockerd[1876]: time="2024-12-13T02:09:52.528062836Z" level=info msg="Loading containers: start."
Dec 13 02:09:52.628649 kernel: Initializing XFRM netlink socket
Dec 13 02:09:52.710345 systemd-networkd[1368]: docker0: Link UP
Dec 13 02:09:52.738067 dockerd[1876]: time="2024-12-13T02:09:52.737954151Z" level=info msg="Loading containers: done."
Dec 13 02:09:52.757396 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck68380049-merged.mount: Deactivated successfully.
Dec 13 02:09:52.761232 dockerd[1876]: time="2024-12-13T02:09:52.760885271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 02:09:52.761232 dockerd[1876]: time="2024-12-13T02:09:52.760996807Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 02:09:52.761232 dockerd[1876]: time="2024-12-13T02:09:52.761093421Z" level=info msg="Daemon has completed initialization"
Dec 13 02:09:52.798596 dockerd[1876]: time="2024-12-13T02:09:52.798366003Z" level=info msg="API listen on /run/docker.sock"
Dec 13 02:09:52.798958 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 02:09:53.979343 containerd[1476]: time="2024-12-13T02:09:53.979264168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 02:09:54.661589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485397923.mount: Deactivated successfully.
Dec 13 02:09:55.674659 containerd[1476]: time="2024-12-13T02:09:55.674495723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:55.676387 containerd[1476]: time="2024-12-13T02:09:55.675849475Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102"
Dec 13 02:09:55.677265 containerd[1476]: time="2024-12-13T02:09:55.677208228Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:55.681740 containerd[1476]: time="2024-12-13T02:09:55.681662980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:55.683917 containerd[1476]: time="2024-12-13T02:09:55.683634380Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 1.704303562s"
Dec 13 02:09:55.683917 containerd[1476]: time="2024-12-13T02:09:55.683681147Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\""
Dec 13 02:09:55.707123 containerd[1476]: time="2024-12-13T02:09:55.707078588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 02:09:57.274290 containerd[1476]: time="2024-12-13T02:09:57.274236895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:57.275628 containerd[1476]: time="2024-12-13T02:09:57.275594684Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714"
Dec 13 02:09:57.276521 containerd[1476]: time="2024-12-13T02:09:57.276487689Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:57.279609 containerd[1476]: time="2024-12-13T02:09:57.279560197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:57.281165 containerd[1476]: time="2024-12-13T02:09:57.280913506Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.573790191s"
Dec 13 02:09:57.281165 containerd[1476]: time="2024-12-13T02:09:57.280946310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\""
Dec 13 02:09:57.304184 containerd[1476]: time="2024-12-13T02:09:57.304132421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 02:09:57.736250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 02:09:57.744785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:09:57.860409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:09:57.875878 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:09:57.924478 kubelet[2094]: E1213 02:09:57.924415 2094 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:09:57.926466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:09:57.926646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:09:58.356822 containerd[1476]: time="2024-12-13T02:09:58.356746155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:58.358349 containerd[1476]: time="2024-12-13T02:09:58.358077219Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352"
Dec 13 02:09:58.359138 containerd[1476]: time="2024-12-13T02:09:58.359084838Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:58.362381 containerd[1476]: time="2024-12-13T02:09:58.362332007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:58.364169 containerd[1476]: time="2024-12-13T02:09:58.363977714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.059580455s"
Dec 13 02:09:58.364169 containerd[1476]: time="2024-12-13T02:09:58.364027321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\""
Dec 13 02:09:58.386811 containerd[1476]: time="2024-12-13T02:09:58.386737219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 02:09:59.393639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778955066.mount: Deactivated successfully.
Dec 13 02:09:59.676717 containerd[1476]: time="2024-12-13T02:09:59.676593356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:59.677908 containerd[1476]: time="2024-12-13T02:09:59.677875532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037"
Dec 13 02:09:59.678921 containerd[1476]: time="2024-12-13T02:09:59.678872989Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:59.681777 containerd[1476]: time="2024-12-13T02:09:59.681718939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:09:59.682890 containerd[1476]: time="2024-12-13T02:09:59.682746920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.295944932s"
Dec 13 02:09:59.682890 containerd[1476]: time="2024-12-13T02:09:59.682778444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Dec 13 02:09:59.705795 containerd[1476]: time="2024-12-13T02:09:59.705509280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:10:00.338335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162254481.mount: Deactivated successfully.
Dec 13 02:10:00.930695 containerd[1476]: time="2024-12-13T02:10:00.930535188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:00.932529 containerd[1476]: time="2024-12-13T02:10:00.931910538Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Dec 13 02:10:00.933552 containerd[1476]: time="2024-12-13T02:10:00.933478425Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:00.940104 containerd[1476]: time="2024-12-13T02:10:00.940028379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:00.941552 containerd[1476]: time="2024-12-13T02:10:00.941386932Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.235807445s"
Dec 13 02:10:00.941552 containerd[1476]: time="2024-12-13T02:10:00.941426967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 02:10:00.964919 containerd[1476]: time="2024-12-13T02:10:00.964866681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 02:10:01.559405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449552798.mount: Deactivated successfully.
Dec 13 02:10:01.567278 containerd[1476]: time="2024-12-13T02:10:01.567082512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:01.568158 containerd[1476]: time="2024-12-13T02:10:01.567966329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Dec 13 02:10:01.569024 containerd[1476]: time="2024-12-13T02:10:01.568967933Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:01.571833 containerd[1476]: time="2024-12-13T02:10:01.571762887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:01.573067 containerd[1476]: time="2024-12-13T02:10:01.572903755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 607.993159ms"
Dec 13 02:10:01.573067 containerd[1476]: time="2024-12-13T02:10:01.572946350Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 02:10:01.597995 containerd[1476]: time="2024-12-13T02:10:01.597922684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 02:10:02.240518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336393239.mount: Deactivated successfully.
Dec 13 02:10:03.749065 containerd[1476]: time="2024-12-13T02:10:03.747922281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:03.750885 containerd[1476]: time="2024-12-13T02:10:03.750852898Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Dec 13 02:10:03.752288 containerd[1476]: time="2024-12-13T02:10:03.752259032Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:03.756560 containerd[1476]: time="2024-12-13T02:10:03.756508193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:10:03.758579 containerd[1476]: time="2024-12-13T02:10:03.758513745Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.160546467s"
Dec 13 02:10:03.758661 containerd[1476]: time="2024-12-13T02:10:03.758581098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Dec 13 02:10:06.933163 update_engine[1453]: I20241213 02:10:06.933082 1453 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 02:10:06.933163 update_engine[1453]: I20241213 02:10:06.933138 1453 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933362 1453 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933753 1453 omaha_request_params.cc:62] Current group set to stable
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933847 1453 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933857 1453 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933873 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933899 1453 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933945 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933954 1453 omaha_request_action.cc:272] Request:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]:
Dec 13 02:10:06.934213 update_engine[1453]: I20241213 02:10:06.933959 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:10:06.936127 update_engine[1453]: I20241213 02:10:06.935617 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:10:06.936127 update_engine[1453]: I20241213 02:10:06.935933 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:10:06.936527 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 02:10:06.937279 update_engine[1453]: E20241213 02:10:06.937057 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:10:06.937279 update_engine[1453]: I20241213 02:10:06.937119 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 02:10:07.538260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:10:07.545823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:10:07.573586 systemd[1]: Reloading requested from client PID 2286 ('systemctl') (unit session-7.scope)...
Dec 13 02:10:07.573607 systemd[1]: Reloading...
Dec 13 02:10:07.684574 zram_generator::config[2329]: No configuration found.
Dec 13 02:10:07.792616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:10:07.862697 systemd[1]: Reloading finished in 288 ms.
Dec 13 02:10:07.928993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 02:10:07.929095 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 02:10:07.929387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:10:07.933707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:10:08.069339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:10:08.082917 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:10:08.125656 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:08.125656 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:10:08.125656 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:08.126147 kubelet[2375]: I1213 02:10:08.125670 2375 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:10:10.153567 kubelet[2375]: I1213 02:10:10.151990 2375 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:10:10.153567 kubelet[2375]: I1213 02:10:10.152018 2375 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:10:10.153567 kubelet[2375]: I1213 02:10:10.152221 2375 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:10:10.198410 kubelet[2375]: E1213 02:10:10.198373 2375 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.218.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.198914 kubelet[2375]: I1213 02:10:10.198874 2375 
dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:10:10.217122 kubelet[2375]: I1213 02:10:10.217081 2375 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:10:10.218813 kubelet[2375]: I1213 02:10:10.218678 2375 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:10:10.219136 kubelet[2375]: I1213 02:10:10.218781 2375 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-f-bc189a5809","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPU
Limits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:10:10.219324 kubelet[2375]: I1213 02:10:10.219195 2375 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:10:10.219324 kubelet[2375]: I1213 02:10:10.219212 2375 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:10:10.219613 kubelet[2375]: I1213 02:10:10.219562 2375 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:10.220940 kubelet[2375]: I1213 02:10:10.220887 2375 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:10:10.220940 kubelet[2375]: I1213 02:10:10.220916 2375 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:10:10.221078 kubelet[2375]: I1213 02:10:10.221070 2375 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:10:10.222787 kubelet[2375]: I1213 02:10:10.221156 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:10:10.223711 kubelet[2375]: I1213 02:10:10.223679 2375 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:10:10.224318 kubelet[2375]: I1213 02:10:10.224294 2375 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:10:10.224635 kubelet[2375]: W1213 02:10:10.224611 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 02:10:10.226040 kubelet[2375]: I1213 02:10:10.226009 2375 server.go:1264] "Started kubelet" Dec 13 02:10:10.226530 kubelet[2375]: W1213 02:10:10.226459 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-bc189a5809&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.226759 kubelet[2375]: E1213 02:10:10.226732 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-bc189a5809&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.233524 kubelet[2375]: E1213 02:10:10.233013 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.218.196:6443/api/v1/namespaces/default/events\": dial tcp 78.47.218.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-f-bc189a5809.18109a926d38a36c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-f-bc189a5809,UID:ci-4081-2-1-f-bc189a5809,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-f-bc189a5809,},FirstTimestamp:2024-12-13 02:10:10.225972076 +0000 UTC m=+2.139947072,LastTimestamp:2024-12-13 02:10:10.225972076 +0000 UTC m=+2.139947072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-f-bc189a5809,}" Dec 13 02:10:10.233524 kubelet[2375]: W1213 02:10:10.233290 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.233524 kubelet[2375]: E1213 02:10:10.233332 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.234902 kubelet[2375]: I1213 02:10:10.234820 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:10:10.235397 kubelet[2375]: I1213 02:10:10.235367 2375 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:10:10.235474 kubelet[2375]: I1213 02:10:10.235421 2375 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:10:10.235784 kubelet[2375]: I1213 02:10:10.235759 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:10:10.236407 kubelet[2375]: I1213 02:10:10.236367 2375 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:10:10.239984 kubelet[2375]: I1213 02:10:10.239958 2375 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:10:10.240129 kubelet[2375]: I1213 02:10:10.240054 2375 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:10:10.241837 kubelet[2375]: I1213 02:10:10.241161 2375 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:10:10.241837 kubelet[2375]: W1213 02:10:10.241535 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.241837 kubelet[2375]: E1213 02:10:10.241602 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.241837 kubelet[2375]: E1213 02:10:10.241734 2375 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:10:10.242751 kubelet[2375]: E1213 02:10:10.242488 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-bc189a5809?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="200ms" Dec 13 02:10:10.243369 kubelet[2375]: I1213 02:10:10.242988 2375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:10:10.244631 kubelet[2375]: I1213 02:10:10.244159 2375 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:10:10.244631 kubelet[2375]: I1213 02:10:10.244179 2375 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:10:10.255165 kubelet[2375]: I1213 02:10:10.255095 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:10:10.256033 kubelet[2375]: I1213 02:10:10.256004 2375 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:10:10.256172 kubelet[2375]: I1213 02:10:10.256161 2375 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:10:10.256205 kubelet[2375]: I1213 02:10:10.256186 2375 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:10:10.256242 kubelet[2375]: E1213 02:10:10.256226 2375 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:10:10.263908 kubelet[2375]: W1213 02:10:10.263786 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.263908 kubelet[2375]: E1213 02:10:10.263867 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:10.276602 kubelet[2375]: I1213 02:10:10.276574 2375 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:10:10.276602 kubelet[2375]: I1213 02:10:10.276592 2375 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:10:10.276602 kubelet[2375]: I1213 02:10:10.276609 2375 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:10.278421 kubelet[2375]: I1213 02:10:10.278391 2375 policy_none.go:49] "None policy: Start" Dec 13 02:10:10.279077 kubelet[2375]: I1213 02:10:10.278976 2375 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:10:10.279077 kubelet[2375]: I1213 02:10:10.279001 2375 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:10:10.284362 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 02:10:10.292888 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:10:10.296625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 02:10:10.310679 kubelet[2375]: I1213 02:10:10.309720 2375 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:10:10.310679 kubelet[2375]: I1213 02:10:10.310045 2375 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:10:10.310679 kubelet[2375]: I1213 02:10:10.310259 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:10:10.314072 kubelet[2375]: E1213 02:10:10.314022 2375 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-f-bc189a5809\" not found" Dec 13 02:10:10.342822 kubelet[2375]: I1213 02:10:10.342775 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.343690 kubelet[2375]: E1213 02:10:10.343642 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.357066 kubelet[2375]: I1213 02:10:10.356718 2375 topology_manager.go:215] "Topology Admit Handler" podUID="c44ba45e834309dcebc156267ff3326a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.359168 kubelet[2375]: I1213 02:10:10.359112 2375 topology_manager.go:215] "Topology Admit Handler" podUID="96f8ddc240df660890281f59c56be8bb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.362233 kubelet[2375]: I1213 02:10:10.361910 2375 topology_manager.go:215] "Topology Admit Handler" 
podUID="442f06a27509a8d025320d623f745a6c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.371825 systemd[1]: Created slice kubepods-burstable-podc44ba45e834309dcebc156267ff3326a.slice - libcontainer container kubepods-burstable-podc44ba45e834309dcebc156267ff3326a.slice. Dec 13 02:10:10.391760 systemd[1]: Created slice kubepods-burstable-pod96f8ddc240df660890281f59c56be8bb.slice - libcontainer container kubepods-burstable-pod96f8ddc240df660890281f59c56be8bb.slice. Dec 13 02:10:10.397105 systemd[1]: Created slice kubepods-burstable-pod442f06a27509a8d025320d623f745a6c.slice - libcontainer container kubepods-burstable-pod442f06a27509a8d025320d623f745a6c.slice. Dec 13 02:10:10.441889 kubelet[2375]: I1213 02:10:10.441760 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442391 kubelet[2375]: I1213 02:10:10.442040 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442391 kubelet[2375]: I1213 02:10:10.442080 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442391 kubelet[2375]: I1213 02:10:10.442138 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442391 kubelet[2375]: I1213 02:10:10.442168 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442391 kubelet[2375]: I1213 02:10:10.442198 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442718 kubelet[2375]: I1213 02:10:10.442226 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442718 kubelet[2375]: I1213 02:10:10.442253 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/442f06a27509a8d025320d623f745a6c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-f-bc189a5809\" (UID: \"442f06a27509a8d025320d623f745a6c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.442718 kubelet[2375]: I1213 02:10:10.442280 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.443354 kubelet[2375]: E1213 02:10:10.443295 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-bc189a5809?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="400ms" Dec 13 02:10:10.546833 kubelet[2375]: I1213 02:10:10.546674 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.547438 kubelet[2375]: E1213 02:10:10.547395 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.688285 containerd[1476]: time="2024-12-13T02:10:10.688226801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-f-bc189a5809,Uid:c44ba45e834309dcebc156267ff3326a,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.697560 containerd[1476]: time="2024-12-13T02:10:10.696910163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-f-bc189a5809,Uid:96f8ddc240df660890281f59c56be8bb,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.700649 containerd[1476]: time="2024-12-13T02:10:10.700464135Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-f-bc189a5809,Uid:442f06a27509a8d025320d623f745a6c,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.844947 kubelet[2375]: E1213 02:10:10.844873 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-bc189a5809?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="800ms" Dec 13 02:10:10.951080 kubelet[2375]: I1213 02:10:10.950870 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:10.951382 kubelet[2375]: E1213 02:10:10.951281 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-f-bc189a5809" Dec 13 02:10:11.193608 kubelet[2375]: W1213 02:10:11.193473 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-bc189a5809&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:11.193608 kubelet[2375]: E1213 02:10:11.193614 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-f-bc189a5809&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:11.231034 kubelet[2375]: W1213 02:10:11.230816 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:11.231034 kubelet[2375]: E1213 02:10:11.230914 
2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:11.252943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169669144.mount: Deactivated successfully. Dec 13 02:10:11.262126 containerd[1476]: time="2024-12-13T02:10:11.261976563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.263587 containerd[1476]: time="2024-12-13T02:10:11.263514272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.264665 containerd[1476]: time="2024-12-13T02:10:11.264631206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 02:10:11.265921 containerd[1476]: time="2024-12-13T02:10:11.265866532Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.268352 containerd[1476]: time="2024-12-13T02:10:11.268152557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:10:11.268352 containerd[1476]: time="2024-12-13T02:10:11.268243632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.268352 containerd[1476]: time="2024-12-13T02:10:11.268296868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Dec 13 02:10:11.273303 containerd[1476]: time="2024-12-13T02:10:11.273210697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.274806 containerd[1476]: time="2024-12-13T02:10:11.274399347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.854217ms" Dec 13 02:10:11.275949 containerd[1476]: time="2024-12-13T02:10:11.275902978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.890901ms" Dec 13 02:10:11.276694 containerd[1476]: time="2024-12-13T02:10:11.276663932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.278382ms" Dec 13 02:10:11.364588 kubelet[2375]: W1213 02:10:11.363806 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:10:11.364588 kubelet[2375]: E1213 02:10:11.363879 2375 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 02:10:11.413224 containerd[1476]: time="2024-12-13T02:10:11.413125205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:10:11.413430 containerd[1476]: time="2024-12-13T02:10:11.413241478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:10:11.413430 containerd[1476]: time="2024-12-13T02:10:11.413274356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.413948 containerd[1476]: time="2024-12-13T02:10:11.413824483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.416544 containerd[1476]: time="2024-12-13T02:10:11.416295217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:10:11.416544 containerd[1476]: time="2024-12-13T02:10:11.416338414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:10:11.416544 containerd[1476]: time="2024-12-13T02:10:11.416357413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.416544 containerd[1476]: time="2024-12-13T02:10:11.416428209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.421290 containerd[1476]: time="2024-12-13T02:10:11.420604201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:10:11.421290 containerd[1476]: time="2024-12-13T02:10:11.421251363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:10:11.421290 containerd[1476]: time="2024-12-13T02:10:11.421266842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.423151 containerd[1476]: time="2024-12-13T02:10:11.422674359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:11.442958 systemd[1]: Started cri-containerd-9ec33bf36fa3e3c305f0a327592c04ca89e10400a2dab4d3338f874cb0ddf104.scope - libcontainer container 9ec33bf36fa3e3c305f0a327592c04ca89e10400a2dab4d3338f874cb0ddf104.
Dec 13 02:10:11.448469 systemd[1]: Started cri-containerd-0483abc4df4c38c6e7f1069d71f649ee684554f3469ecb81ef4aafc4a6996a43.scope - libcontainer container 0483abc4df4c38c6e7f1069d71f649ee684554f3469ecb81ef4aafc4a6996a43.
Dec 13 02:10:11.450250 systemd[1]: Started cri-containerd-9a9aa2b78b426d6551935a21e26f2999d7f77c2e1d660cdef65f7fa81da96405.scope - libcontainer container 9a9aa2b78b426d6551935a21e26f2999d7f77c2e1d660cdef65f7fa81da96405.
Dec 13 02:10:11.501850 containerd[1476]: time="2024-12-13T02:10:11.501752832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-f-bc189a5809,Uid:c44ba45e834309dcebc156267ff3326a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ec33bf36fa3e3c305f0a327592c04ca89e10400a2dab4d3338f874cb0ddf104\""
Dec 13 02:10:11.508620 containerd[1476]: time="2024-12-13T02:10:11.508493632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-f-bc189a5809,Uid:96f8ddc240df660890281f59c56be8bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0483abc4df4c38c6e7f1069d71f649ee684554f3469ecb81ef4aafc4a6996a43\""
Dec 13 02:10:11.513094 containerd[1476]: time="2024-12-13T02:10:11.513049202Z" level=info msg="CreateContainer within sandbox \"9ec33bf36fa3e3c305f0a327592c04ca89e10400a2dab4d3338f874cb0ddf104\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 02:10:11.514608 containerd[1476]: time="2024-12-13T02:10:11.514501076Z" level=info msg="CreateContainer within sandbox \"0483abc4df4c38c6e7f1069d71f649ee684554f3469ecb81ef4aafc4a6996a43\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 02:10:11.519685 containerd[1476]: time="2024-12-13T02:10:11.519653931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-f-bc189a5809,Uid:442f06a27509a8d025320d623f745a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a9aa2b78b426d6551935a21e26f2999d7f77c2e1d660cdef65f7fa81da96405\""
Dec 13 02:10:11.522921 containerd[1476]: time="2024-12-13T02:10:11.522873020Z" level=info msg="CreateContainer within sandbox \"9a9aa2b78b426d6551935a21e26f2999d7f77c2e1d660cdef65f7fa81da96405\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 02:10:11.531946 containerd[1476]: time="2024-12-13T02:10:11.531904365Z" level=info msg="CreateContainer within sandbox \"9ec33bf36fa3e3c305f0a327592c04ca89e10400a2dab4d3338f874cb0ddf104\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"466bf802082e17242295077107cd501479e6c2cd64ca860674b9b6f83f92afde\""
Dec 13 02:10:11.534125 containerd[1476]: time="2024-12-13T02:10:11.533009499Z" level=info msg="StartContainer for \"466bf802082e17242295077107cd501479e6c2cd64ca860674b9b6f83f92afde\""
Dec 13 02:10:11.540923 containerd[1476]: time="2024-12-13T02:10:11.540883273Z" level=info msg="CreateContainer within sandbox \"0483abc4df4c38c6e7f1069d71f649ee684554f3469ecb81ef4aafc4a6996a43\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9da6ac0b65c5cad718845ed07cd5150e8dbdc2aff8c98e9d5cff29b3d7403baf\""
Dec 13 02:10:11.541788 containerd[1476]: time="2024-12-13T02:10:11.541763261Z" level=info msg="StartContainer for \"9da6ac0b65c5cad718845ed07cd5150e8dbdc2aff8c98e9d5cff29b3d7403baf\""
Dec 13 02:10:11.548161 containerd[1476]: time="2024-12-13T02:10:11.548118484Z" level=info msg="CreateContainer within sandbox \"9a9aa2b78b426d6551935a21e26f2999d7f77c2e1d660cdef65f7fa81da96405\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cac66976b9ded3ba8c12798309ab6d48362cacf9a50c44bbb99df181e905478d\""
Dec 13 02:10:11.548764 containerd[1476]: time="2024-12-13T02:10:11.548735887Z" level=info msg="StartContainer for \"cac66976b9ded3ba8c12798309ab6d48362cacf9a50c44bbb99df181e905478d\""
Dec 13 02:10:11.566733 systemd[1]: Started cri-containerd-466bf802082e17242295077107cd501479e6c2cd64ca860674b9b6f83f92afde.scope - libcontainer container 466bf802082e17242295077107cd501479e6c2cd64ca860674b9b6f83f92afde.
Dec 13 02:10:11.583740 systemd[1]: Started cri-containerd-9da6ac0b65c5cad718845ed07cd5150e8dbdc2aff8c98e9d5cff29b3d7403baf.scope - libcontainer container 9da6ac0b65c5cad718845ed07cd5150e8dbdc2aff8c98e9d5cff29b3d7403baf.
Dec 13 02:10:11.592710 systemd[1]: Started cri-containerd-cac66976b9ded3ba8c12798309ab6d48362cacf9a50c44bbb99df181e905478d.scope - libcontainer container cac66976b9ded3ba8c12798309ab6d48362cacf9a50c44bbb99df181e905478d.
Dec 13 02:10:11.628560 containerd[1476]: time="2024-12-13T02:10:11.628382807Z" level=info msg="StartContainer for \"466bf802082e17242295077107cd501479e6c2cd64ca860674b9b6f83f92afde\" returns successfully"
Dec 13 02:10:11.650065 kubelet[2375]: E1213 02:10:11.647701 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-f-bc189a5809?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="1.6s"
Dec 13 02:10:11.652911 containerd[1476]: time="2024-12-13T02:10:11.652768721Z" level=info msg="StartContainer for \"9da6ac0b65c5cad718845ed07cd5150e8dbdc2aff8c98e9d5cff29b3d7403baf\" returns successfully"
Dec 13 02:10:11.655760 kubelet[2375]: W1213 02:10:11.654743 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 02:10:11.655760 kubelet[2375]: E1213 02:10:11.654809 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 02:10:11.676118 containerd[1476]: time="2024-12-13T02:10:11.675865993Z" level=info msg="StartContainer for \"cac66976b9ded3ba8c12798309ab6d48362cacf9a50c44bbb99df181e905478d\" returns successfully"
Dec 13 02:10:11.756552 kubelet[2375]: I1213 02:10:11.756418 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:11.756774 kubelet[2375]: E1213 02:10:11.756744 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:13.360190 kubelet[2375]: I1213 02:10:13.360155 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:13.900359 kubelet[2375]: E1213 02:10:13.900321 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-f-bc189a5809\" not found" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:13.960508 kubelet[2375]: I1213 02:10:13.960463 2375 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:14.024812 kubelet[2375]: E1213 02:10:14.024774 2375 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-f-bc189a5809\" not found"
Dec 13 02:10:14.225244 kubelet[2375]: I1213 02:10:14.224855 2375 apiserver.go:52] "Watching apiserver"
Dec 13 02:10:14.240545 kubelet[2375]: I1213 02:10:14.240495 2375 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 02:10:14.296296 kubelet[2375]: E1213 02:10:14.296253 2375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.036647 systemd[1]: Reloading requested from client PID 2648 ('systemctl') (unit session-7.scope)...
Dec 13 02:10:16.036670 systemd[1]: Reloading...
Dec 13 02:10:16.135492 zram_generator::config[2691]: No configuration found.
Dec 13 02:10:16.239561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:10:16.323916 systemd[1]: Reloading finished in 286 ms.
Dec 13 02:10:16.370891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:10:16.383860 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:10:16.384196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:10:16.384294 systemd[1]: kubelet.service: Consumed 2.589s CPU time, 114.1M memory peak, 0B memory swap peak.
Dec 13 02:10:16.390912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:10:16.513307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:10:16.519028 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 02:10:16.568148 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:10:16.568148 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:10:16.568148 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:10:16.568522 kubelet[2733]: I1213 02:10:16.568190 2733 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:10:16.572530 kubelet[2733]: I1213 02:10:16.572219 2733 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 02:10:16.572530 kubelet[2733]: I1213 02:10:16.572242 2733 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:10:16.572530 kubelet[2733]: I1213 02:10:16.572391 2733 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 02:10:16.573731 kubelet[2733]: I1213 02:10:16.573709 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 02:10:16.576021 kubelet[2733]: I1213 02:10:16.575098 2733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:10:16.580308 kubelet[2733]: I1213 02:10:16.580289 2733 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:10:16.580617 kubelet[2733]: I1213 02:10:16.580590 2733 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:10:16.580831 kubelet[2733]: I1213 02:10:16.580681 2733 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-f-bc189a5809","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.580946 2733 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.580961 2733 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.581005 2733 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.581154 2733 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.581168 2733 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.581195 2733 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:10:16.581563 kubelet[2733]: I1213 02:10:16.581211 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:10:16.585173 kubelet[2733]: I1213 02:10:16.585147 2733 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 02:10:16.585340 kubelet[2733]: I1213 02:10:16.585319 2733 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:10:16.586064 kubelet[2733]: I1213 02:10:16.586034 2733 server.go:1264] "Started kubelet"
Dec 13 02:10:16.588451 kubelet[2733]: I1213 02:10:16.588390 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:10:16.588927 kubelet[2733]: I1213 02:10:16.588901 2733 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:10:16.591895 kubelet[2733]: I1213 02:10:16.591877 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:10:16.594605 kubelet[2733]: I1213 02:10:16.594563 2733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:10:16.595963 kubelet[2733]: I1213 02:10:16.595946 2733 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 02:10:16.598930 kubelet[2733]: I1213 02:10:16.598909 2733 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:10:16.603747 kubelet[2733]: I1213 02:10:16.603728 2733 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 02:10:16.604495 kubelet[2733]: I1213 02:10:16.604141 2733 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 02:10:16.606421 kubelet[2733]: I1213 02:10:16.606392 2733 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:10:16.606520 kubelet[2733]: I1213 02:10:16.606491 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:10:16.608026 kubelet[2733]: I1213 02:10:16.607998 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:10:16.609662 kubelet[2733]: I1213 02:10:16.609643 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:10:16.610138 kubelet[2733]: I1213 02:10:16.609809 2733 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:10:16.610138 kubelet[2733]: I1213 02:10:16.609834 2733 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 02:10:16.610138 kubelet[2733]: E1213 02:10:16.609873 2733 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:10:16.615884 kubelet[2733]: E1213 02:10:16.615847 2733 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:10:16.618556 kubelet[2733]: I1213 02:10:16.616913 2733 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:10:16.678058 kubelet[2733]: I1213 02:10:16.678027 2733 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:10:16.678058 kubelet[2733]: I1213 02:10:16.678047 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:10:16.678058 kubelet[2733]: I1213 02:10:16.678070 2733 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:10:16.678799 kubelet[2733]: I1213 02:10:16.678215 2733 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 02:10:16.678799 kubelet[2733]: I1213 02:10:16.678226 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 02:10:16.678799 kubelet[2733]: I1213 02:10:16.678243 2733 policy_none.go:49] "None policy: Start"
Dec 13 02:10:16.679199 kubelet[2733]: I1213 02:10:16.679141 2733 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:10:16.679199 kubelet[2733]: I1213 02:10:16.679179 2733 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:10:16.679401 kubelet[2733]: I1213 02:10:16.679332 2733 state_mem.go:75] "Updated machine memory state"
Dec 13 02:10:16.683708 kubelet[2733]: I1213 02:10:16.683678 2733 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:10:16.683881 kubelet[2733]: I1213 02:10:16.683832 2733 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:10:16.683956 kubelet[2733]: I1213 02:10:16.683931 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:10:16.703483 kubelet[2733]: I1213 02:10:16.703439 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.711301 kubelet[2733]: I1213 02:10:16.709974 2733 topology_manager.go:215] "Topology Admit Handler" podUID="96f8ddc240df660890281f59c56be8bb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.711301 kubelet[2733]: I1213 02:10:16.710152 2733 topology_manager.go:215] "Topology Admit Handler" podUID="442f06a27509a8d025320d623f745a6c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.711301 kubelet[2733]: I1213 02:10:16.710214 2733 topology_manager.go:215] "Topology Admit Handler" podUID="c44ba45e834309dcebc156267ff3326a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.718505 kubelet[2733]: I1213 02:10:16.718473 2733 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.718741 kubelet[2733]: I1213 02:10:16.718726 2733 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805002 kubelet[2733]: I1213 02:10:16.804890 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805002 kubelet[2733]: I1213 02:10:16.804954 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805249 kubelet[2733]: I1213 02:10:16.805038 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805249 kubelet[2733]: I1213 02:10:16.805079 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/442f06a27509a8d025320d623f745a6c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-f-bc189a5809\" (UID: \"442f06a27509a8d025320d623f745a6c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805249 kubelet[2733]: I1213 02:10:16.805119 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805249 kubelet[2733]: I1213 02:10:16.805157 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96f8ddc240df660890281f59c56be8bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-f-bc189a5809\" (UID: \"96f8ddc240df660890281f59c56be8bb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805249 kubelet[2733]: I1213 02:10:16.805197 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805502 kubelet[2733]: I1213 02:10:16.805233 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.805502 kubelet[2733]: I1213 02:10:16.805268 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c44ba45e834309dcebc156267ff3326a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-f-bc189a5809\" (UID: \"c44ba45e834309dcebc156267ff3326a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809"
Dec 13 02:10:16.919783 update_engine[1453]: I20241213 02:10:16.919593 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:10:16.920176 update_engine[1453]: I20241213 02:10:16.919929 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:10:16.920774 update_engine[1453]: I20241213 02:10:16.920212 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:10:16.921143 update_engine[1453]: E20241213 02:10:16.921020 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:10:16.921143 update_engine[1453]: I20241213 02:10:16.921104 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 02:10:17.038330 sudo[2765]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 02:10:17.039001 sudo[2765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 02:10:17.480599 sudo[2765]: pam_unix(sudo:session): session closed for user root
Dec 13 02:10:17.582981 kubelet[2733]: I1213 02:10:17.582604 2733 apiserver.go:52] "Watching apiserver"
Dec 13 02:10:17.605105 kubelet[2733]: I1213 02:10:17.605024 2733 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 02:10:17.687802 kubelet[2733]: I1213 02:10:17.687675 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-f-bc189a5809" podStartSLOduration=1.687657948 podStartE2EDuration="1.687657948s" podCreationTimestamp="2024-12-13 02:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:17.685977003 +0000 UTC m=+1.159999774" watchObservedRunningTime="2024-12-13 02:10:17.687657948 +0000 UTC m=+1.161680719"
Dec 13 02:10:17.710787 kubelet[2733]: I1213 02:10:17.710405 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-f-bc189a5809" podStartSLOduration=1.710385603 podStartE2EDuration="1.710385603s" podCreationTimestamp="2024-12-13 02:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:17.698666307 +0000 UTC m=+1.172689078" watchObservedRunningTime="2024-12-13 02:10:17.710385603 +0000 UTC m=+1.184408374"
Dec 13 02:10:17.722028 kubelet[2733]: I1213 02:10:17.721805 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-f-bc189a5809" podStartSLOduration=1.72178879 podStartE2EDuration="1.72178879s" podCreationTimestamp="2024-12-13 02:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:17.711601564 +0000 UTC m=+1.185624335" watchObservedRunningTime="2024-12-13 02:10:17.72178879 +0000 UTC m=+1.195811561"
Dec 13 02:10:19.596483 sudo[1860]: pam_unix(sudo:session): session closed for user root
Dec 13 02:10:19.756953 sshd[1857]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:19.763561 systemd[1]: sshd@6-78.47.218.196:22-147.75.109.163:49094.service: Deactivated successfully.
Dec 13 02:10:19.766292 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:10:19.766800 systemd[1]: session-7.scope: Consumed 6.184s CPU time, 186.7M memory peak, 0B memory swap peak.
Dec 13 02:10:19.768525 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:10:19.770327 systemd-logind[1452]: Removed session 7.
Dec 13 02:10:26.926374 update_engine[1453]: I20241213 02:10:26.925594 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:10:26.926374 update_engine[1453]: I20241213 02:10:26.925966 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:10:26.926374 update_engine[1453]: I20241213 02:10:26.926252 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:10:26.927757 update_engine[1453]: E20241213 02:10:26.927709 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:10:26.927996 update_engine[1453]: I20241213 02:10:26.927938 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 02:10:32.064384 kubelet[2733]: I1213 02:10:32.064323 2733 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 02:10:32.066461 containerd[1476]: time="2024-12-13T02:10:32.065710879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:10:32.066894 kubelet[2733]: I1213 02:10:32.065888 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 02:10:32.940875 kubelet[2733]: I1213 02:10:32.940651 2733 topology_manager.go:215] "Topology Admit Handler" podUID="5f17f230-bd5a-4ed9-8039-7a9e6fb387f2" podNamespace="kube-system" podName="kube-proxy-nt7v4"
Dec 13 02:10:32.951075 kubelet[2733]: I1213 02:10:32.950808 2733 topology_manager.go:215] "Topology Admit Handler" podUID="bbc78614-2e91-4c8a-a962-739f02408941" podNamespace="kube-system" podName="cilium-c56k2"
Dec 13 02:10:32.951400 systemd[1]: Created slice kubepods-besteffort-pod5f17f230_bd5a_4ed9_8039_7a9e6fb387f2.slice - libcontainer container kubepods-besteffort-pod5f17f230_bd5a_4ed9_8039_7a9e6fb387f2.slice.
Dec 13 02:10:32.972158 systemd[1]: Created slice kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice - libcontainer container kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice.
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105121 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-cgroup\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105210 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-hubble-tls\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105252 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2nb7\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-kube-api-access-t2nb7\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105289 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f17f230-bd5a-4ed9-8039-7a9e6fb387f2-kube-proxy\") pod \"kube-proxy-nt7v4\" (UID: \"5f17f230-bd5a-4ed9-8039-7a9e6fb387f2\") " pod="kube-system/kube-proxy-nt7v4"
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105324 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cni-path\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.105690 kubelet[2733]: I1213 02:10:33.105357 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbc78614-2e91-4c8a-a962-739f02408941-cilium-config-path\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105390 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-net\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105424 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-run\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105846 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f17f230-bd5a-4ed9-8039-7a9e6fb387f2-xtables-lock\") pod \"kube-proxy-nt7v4\" (UID: \"5f17f230-bd5a-4ed9-8039-7a9e6fb387f2\") " pod="kube-system/kube-proxy-nt7v4"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105905 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-bpf-maps\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105945 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-kernel\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.106645 kubelet[2733]: I1213 02:10:33.105990 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-hostproc\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.107323 kubelet[2733]: I1213 02:10:33.106024 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f17f230-bd5a-4ed9-8039-7a9e6fb387f2-lib-modules\") pod \"kube-proxy-nt7v4\" (UID: \"5f17f230-bd5a-4ed9-8039-7a9e6fb387f2\") " pod="kube-system/kube-proxy-nt7v4"
Dec 13 02:10:33.107323 kubelet[2733]: I1213 02:10:33.106060 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lf4\" (UniqueName: \"kubernetes.io/projected/5f17f230-bd5a-4ed9-8039-7a9e6fb387f2-kube-api-access-44lf4\") pod \"kube-proxy-nt7v4\" (UID: \"5f17f230-bd5a-4ed9-8039-7a9e6fb387f2\") " pod="kube-system/kube-proxy-nt7v4"
Dec 13 02:10:33.107323 kubelet[2733]: I1213 02:10:33.106101 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbc78614-2e91-4c8a-a962-739f02408941-clustermesh-secrets\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.107323 kubelet[2733]: I1213 02:10:33.106136 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-etc-cni-netd\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.107323 kubelet[2733]: I1213 02:10:33.106170 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-lib-modules\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.107761 kubelet[2733]: I1213 02:10:33.106209 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-xtables-lock\") pod \"cilium-c56k2\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " pod="kube-system/cilium-c56k2"
Dec 13 02:10:33.156946 kubelet[2733]: I1213 02:10:33.156886 2733 topology_manager.go:215] "Topology Admit Handler" podUID="1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" podNamespace="kube-system" podName="cilium-operator-599987898-ljgh4"
Dec 13 02:10:33.164288 systemd[1]: Created slice kubepods-besteffort-pod1a8ad8c5_f57a_4b24_90c4_d5c26f9c4112.slice - libcontainer container kubepods-besteffort-pod1a8ad8c5_f57a_4b24_90c4_d5c26f9c4112.slice.
Dec 13 02:10:33.266711 containerd[1476]: time="2024-12-13T02:10:33.266657666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nt7v4,Uid:5f17f230-bd5a-4ed9-8039-7a9e6fb387f2,Namespace:kube-system,Attempt:0,}"
Dec 13 02:10:33.280854 containerd[1476]: time="2024-12-13T02:10:33.279295967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c56k2,Uid:bbc78614-2e91-4c8a-a962-739f02408941,Namespace:kube-system,Attempt:0,}"
Dec 13 02:10:33.294620 containerd[1476]: time="2024-12-13T02:10:33.292720723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:33.294620 containerd[1476]: time="2024-12-13T02:10:33.292774044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:33.294620 containerd[1476]: time="2024-12-13T02:10:33.292789884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.294620 containerd[1476]: time="2024-12-13T02:10:33.292866165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.306190 containerd[1476]: time="2024-12-13T02:10:33.305654469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:33.306190 containerd[1476]: time="2024-12-13T02:10:33.305835752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:33.306190 containerd[1476]: time="2024-12-13T02:10:33.305848353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.307177 containerd[1476]: time="2024-12-13T02:10:33.306806009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.308613 kubelet[2733]: I1213 02:10:33.308025 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gnn4\" (UniqueName: \"kubernetes.io/projected/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-kube-api-access-7gnn4\") pod \"cilium-operator-599987898-ljgh4\" (UID: \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\") " pod="kube-system/cilium-operator-599987898-ljgh4" Dec 13 02:10:33.308613 kubelet[2733]: I1213 02:10:33.308062 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-cilium-config-path\") pod \"cilium-operator-599987898-ljgh4\" (UID: \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\") " pod="kube-system/cilium-operator-599987898-ljgh4" Dec 13 02:10:33.311895 systemd[1]: Started cri-containerd-27c5942ac86d206b59a9a2c6d7363012881b0891c4da515090820d97d7b80b1d.scope - libcontainer container 27c5942ac86d206b59a9a2c6d7363012881b0891c4da515090820d97d7b80b1d. Dec 13 02:10:33.327720 systemd[1]: Started cri-containerd-89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48.scope - libcontainer container 89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48. 
Dec 13 02:10:33.353060 containerd[1476]: time="2024-12-13T02:10:33.352936978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nt7v4,Uid:5f17f230-bd5a-4ed9-8039-7a9e6fb387f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"27c5942ac86d206b59a9a2c6d7363012881b0891c4da515090820d97d7b80b1d\"" Dec 13 02:10:33.357739 containerd[1476]: time="2024-12-13T02:10:33.357590459Z" level=info msg="CreateContainer within sandbox \"27c5942ac86d206b59a9a2c6d7363012881b0891c4da515090820d97d7b80b1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:10:33.362250 containerd[1476]: time="2024-12-13T02:10:33.362197780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c56k2,Uid:bbc78614-2e91-4c8a-a962-739f02408941,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\"" Dec 13 02:10:33.364817 containerd[1476]: time="2024-12-13T02:10:33.364663903Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:10:33.376050 containerd[1476]: time="2024-12-13T02:10:33.375947501Z" level=info msg="CreateContainer within sandbox \"27c5942ac86d206b59a9a2c6d7363012881b0891c4da515090820d97d7b80b1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"770c2de0a2fc0c94df78b5ee179a956818ad1ebc90cd3f6f8f20ae1a94ce6fa7\"" Dec 13 02:10:33.376605 containerd[1476]: time="2024-12-13T02:10:33.376560151Z" level=info msg="StartContainer for \"770c2de0a2fc0c94df78b5ee179a956818ad1ebc90cd3f6f8f20ae1a94ce6fa7\"" Dec 13 02:10:33.403819 systemd[1]: Started cri-containerd-770c2de0a2fc0c94df78b5ee179a956818ad1ebc90cd3f6f8f20ae1a94ce6fa7.scope - libcontainer container 770c2de0a2fc0c94df78b5ee179a956818ad1ebc90cd3f6f8f20ae1a94ce6fa7. 
Dec 13 02:10:33.440927 containerd[1476]: time="2024-12-13T02:10:33.440883798Z" level=info msg="StartContainer for \"770c2de0a2fc0c94df78b5ee179a956818ad1ebc90cd3f6f8f20ae1a94ce6fa7\" returns successfully" Dec 13 02:10:33.470604 containerd[1476]: time="2024-12-13T02:10:33.470453116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ljgh4,Uid:1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:33.503692 containerd[1476]: time="2024-12-13T02:10:33.503405333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:33.503692 containerd[1476]: time="2024-12-13T02:10:33.503475335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:33.504432 containerd[1476]: time="2024-12-13T02:10:33.503521455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.504432 containerd[1476]: time="2024-12-13T02:10:33.503678538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.525725 systemd[1]: Started cri-containerd-754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134.scope - libcontainer container 754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134. 
Dec 13 02:10:33.575571 containerd[1476]: time="2024-12-13T02:10:33.575014828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ljgh4,Uid:1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112,Namespace:kube-system,Attempt:0,} returns sandbox id \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\"" Dec 13 02:10:36.922635 update_engine[1453]: I20241213 02:10:36.922572 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:36.923693 update_engine[1453]: I20241213 02:10:36.922811 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:36.923693 update_engine[1453]: I20241213 02:10:36.922993 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:36.923743 update_engine[1453]: E20241213 02:10:36.923719 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:36.923776 update_engine[1453]: I20241213 02:10:36.923764 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:10:36.923802 update_engine[1453]: I20241213 02:10:36.923774 1453 omaha_request_action.cc:617] Omaha request response: Dec 13 02:10:36.923860 update_engine[1453]: E20241213 02:10:36.923831 1453 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 02:10:36.923894 update_engine[1453]: I20241213 02:10:36.923863 1453 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 02:10:36.923894 update_engine[1453]: I20241213 02:10:36.923871 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:36.923894 update_engine[1453]: I20241213 02:10:36.923875 1453 update_attempter.cc:306] Processing Done. Dec 13 02:10:36.923894 update_engine[1453]: E20241213 02:10:36.923890 1453 update_attempter.cc:619] Update failed. 
Dec 13 02:10:36.923985 update_engine[1453]: I20241213 02:10:36.923895 1453 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 02:10:36.923985 update_engine[1453]: I20241213 02:10:36.923900 1453 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 02:10:36.923985 update_engine[1453]: I20241213 02:10:36.923905 1453 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 02:10:36.924230 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 02:10:36.924449 update_engine[1453]: I20241213 02:10:36.924235 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:10:36.924449 update_engine[1453]: I20241213 02:10:36.924281 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 02:10:36.924449 update_engine[1453]: I20241213 02:10:36.924288 1453 omaha_request_action.cc:272] Request: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: Dec 13 02:10:36.924449 update_engine[1453]: I20241213 02:10:36.924295 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:36.924449 update_engine[1453]: I20241213 02:10:36.924427 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:36.924717 update_engine[1453]: I20241213 02:10:36.924577 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 02:10:36.925254 update_engine[1453]: E20241213 02:10:36.925212 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925274 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925284 1453 omaha_request_action.cc:617] Omaha request response: Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925290 1453 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925295 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925300 1453 update_attempter.cc:306] Processing Done. Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925307 1453 update_attempter.cc:310] Error event sent. Dec 13 02:10:36.925319 update_engine[1453]: I20241213 02:10:36.925314 1453 update_check_scheduler.cc:74] Next update check in 41m27s Dec 13 02:10:36.925716 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 02:10:37.107690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430243812.mount: Deactivated successfully. 
Dec 13 02:10:38.425913 containerd[1476]: time="2024-12-13T02:10:38.425835917Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:38.427742 containerd[1476]: time="2024-12-13T02:10:38.427704131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651518" Dec 13 02:10:38.430576 containerd[1476]: time="2024-12-13T02:10:38.428730400Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:38.430576 containerd[1476]: time="2024-12-13T02:10:38.430477770Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.065775267s" Dec 13 02:10:38.430576 containerd[1476]: time="2024-12-13T02:10:38.430519851Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 02:10:38.433576 containerd[1476]: time="2024-12-13T02:10:38.432746315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:10:38.434665 containerd[1476]: time="2024-12-13T02:10:38.434025712Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:10:38.449817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889137234.mount: Deactivated successfully. Dec 13 02:10:38.453847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972256376.mount: Deactivated successfully. Dec 13 02:10:38.458629 containerd[1476]: time="2024-12-13T02:10:38.458586256Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\"" Dec 13 02:10:38.459609 containerd[1476]: time="2024-12-13T02:10:38.459572764Z" level=info msg="StartContainer for \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\"" Dec 13 02:10:38.485915 systemd[1]: Started cri-containerd-b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477.scope - libcontainer container b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477. Dec 13 02:10:38.524825 containerd[1476]: time="2024-12-13T02:10:38.522852338Z" level=info msg="StartContainer for \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\" returns successfully" Dec 13 02:10:38.557947 systemd[1]: cri-containerd-b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477.scope: Deactivated successfully. 
Dec 13 02:10:38.738291 kubelet[2733]: I1213 02:10:38.737868 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nt7v4" podStartSLOduration=6.737843622 podStartE2EDuration="6.737843622s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:33.718756306 +0000 UTC m=+17.192779197" watchObservedRunningTime="2024-12-13 02:10:38.737843622 +0000 UTC m=+22.211866433" Dec 13 02:10:38.742001 containerd[1476]: time="2024-12-13T02:10:38.741927219Z" level=info msg="shim disconnected" id=b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477 namespace=k8s.io Dec 13 02:10:38.742446 containerd[1476]: time="2024-12-13T02:10:38.742216347Z" level=warning msg="cleaning up after shim disconnected" id=b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477 namespace=k8s.io Dec 13 02:10:38.742446 containerd[1476]: time="2024-12-13T02:10:38.742249508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:39.447609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477-rootfs.mount: Deactivated successfully. 
Dec 13 02:10:39.723004 containerd[1476]: time="2024-12-13T02:10:39.722852244Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:10:39.739077 containerd[1476]: time="2024-12-13T02:10:39.739035461Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\"" Dec 13 02:10:39.740424 containerd[1476]: time="2024-12-13T02:10:39.739622599Z" level=info msg="StartContainer for \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\"" Dec 13 02:10:39.782874 systemd[1]: Started cri-containerd-41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5.scope - libcontainer container 41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5. Dec 13 02:10:39.811408 containerd[1476]: time="2024-12-13T02:10:39.811357560Z" level=info msg="StartContainer for \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\" returns successfully" Dec 13 02:10:39.832119 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:10:39.832388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:10:39.832463 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:10:39.839098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:10:39.839291 systemd[1]: cri-containerd-41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5.scope: Deactivated successfully. Dec 13 02:10:39.862987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 02:10:39.867434 containerd[1476]: time="2024-12-13T02:10:39.867356879Z" level=info msg="shim disconnected" id=41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5 namespace=k8s.io Dec 13 02:10:39.867434 containerd[1476]: time="2024-12-13T02:10:39.867432002Z" level=warning msg="cleaning up after shim disconnected" id=41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5 namespace=k8s.io Dec 13 02:10:39.867434 containerd[1476]: time="2024-12-13T02:10:39.867441442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:40.445653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5-rootfs.mount: Deactivated successfully. Dec 13 02:10:40.731504 containerd[1476]: time="2024-12-13T02:10:40.731388994Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:10:40.747286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378165250.mount: Deactivated successfully. Dec 13 02:10:40.756570 containerd[1476]: time="2024-12-13T02:10:40.756072160Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\"" Dec 13 02:10:40.757237 containerd[1476]: time="2024-12-13T02:10:40.757213158Z" level=info msg="StartContainer for \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\"" Dec 13 02:10:40.801889 systemd[1]: Started cri-containerd-7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc.scope - libcontainer container 7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc. 
Dec 13 02:10:40.840499 containerd[1476]: time="2024-12-13T02:10:40.839407442Z" level=info msg="StartContainer for \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\" returns successfully" Dec 13 02:10:40.860424 systemd[1]: cri-containerd-7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc.scope: Deactivated successfully. Dec 13 02:10:40.934248 containerd[1476]: time="2024-12-13T02:10:40.934169496Z" level=info msg="shim disconnected" id=7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc namespace=k8s.io Dec 13 02:10:40.934248 containerd[1476]: time="2024-12-13T02:10:40.934227058Z" level=warning msg="cleaning up after shim disconnected" id=7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc namespace=k8s.io Dec 13 02:10:40.934248 containerd[1476]: time="2024-12-13T02:10:40.934235379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:40.962527 containerd[1476]: time="2024-12-13T02:10:40.962438100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:40.964219 containerd[1476]: time="2024-12-13T02:10:40.964146155Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Dec 13 02:10:40.964683 containerd[1476]: time="2024-12-13T02:10:40.964648092Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:40.966910 containerd[1476]: time="2024-12-13T02:10:40.966670918Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.533867801s" Dec 13 02:10:40.966910 containerd[1476]: time="2024-12-13T02:10:40.966742960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 02:10:40.969399 containerd[1476]: time="2024-12-13T02:10:40.969344565Z" level=info msg="CreateContainer within sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:10:40.982880 containerd[1476]: time="2024-12-13T02:10:40.982740843Z" level=info msg="CreateContainer within sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\"" Dec 13 02:10:40.984961 containerd[1476]: time="2024-12-13T02:10:40.984134048Z" level=info msg="StartContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\"" Dec 13 02:10:41.012765 systemd[1]: Started cri-containerd-2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d.scope - libcontainer container 2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d. Dec 13 02:10:41.039554 containerd[1476]: time="2024-12-13T02:10:41.039494170Z" level=info msg="StartContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" returns successfully" Dec 13 02:10:41.449265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc-rootfs.mount: Deactivated successfully. 
Dec 13 02:10:41.740115 containerd[1476]: time="2024-12-13T02:10:41.740053540Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:10:41.763034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008857275.mount: Deactivated successfully. Dec 13 02:10:41.767687 containerd[1476]: time="2024-12-13T02:10:41.767637013Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\"" Dec 13 02:10:41.768502 containerd[1476]: time="2024-12-13T02:10:41.768460641Z" level=info msg="StartContainer for \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\"" Dec 13 02:10:41.814719 systemd[1]: Started cri-containerd-b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224.scope - libcontainer container b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224. Dec 13 02:10:41.861181 systemd[1]: cri-containerd-b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224.scope: Deactivated successfully. 
Dec 13 02:10:41.867847 containerd[1476]: time="2024-12-13T02:10:41.867771633Z" level=info msg="StartContainer for \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\" returns successfully" Dec 13 02:10:41.868605 containerd[1476]: time="2024-12-13T02:10:41.866797840Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice/cri-containerd-b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224.scope/memory.events\": no such file or directory" Dec 13 02:10:41.911149 containerd[1476]: time="2024-12-13T02:10:41.911086850Z" level=info msg="shim disconnected" id=b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224 namespace=k8s.io Dec 13 02:10:41.911149 containerd[1476]: time="2024-12-13T02:10:41.911140372Z" level=warning msg="cleaning up after shim disconnected" id=b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224 namespace=k8s.io Dec 13 02:10:41.911432 containerd[1476]: time="2024-12-13T02:10:41.911150212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:42.447992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224-rootfs.mount: Deactivated successfully. 
Dec 13 02:10:42.750649 containerd[1476]: time="2024-12-13T02:10:42.750506279Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:10:42.776396 containerd[1476]: time="2024-12-13T02:10:42.776057609Z" level=info msg="CreateContainer within sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\"" Dec 13 02:10:42.776636 kubelet[2733]: I1213 02:10:42.776255 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ljgh4" podStartSLOduration=2.386488706 podStartE2EDuration="9.776204455s" podCreationTimestamp="2024-12-13 02:10:33 +0000 UTC" firstStartedPulling="2024-12-13 02:10:33.577853598 +0000 UTC m=+17.051876369" lastFinishedPulling="2024-12-13 02:10:40.967569347 +0000 UTC m=+24.441592118" observedRunningTime="2024-12-13 02:10:41.889780274 +0000 UTC m=+25.363803045" watchObservedRunningTime="2024-12-13 02:10:42.776204455 +0000 UTC m=+26.250227226" Dec 13 02:10:42.785580 containerd[1476]: time="2024-12-13T02:10:42.781685294Z" level=info msg="StartContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\"" Dec 13 02:10:42.819737 systemd[1]: Started cri-containerd-b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c.scope - libcontainer container b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c. 
Dec 13 02:10:42.849881 containerd[1476]: time="2024-12-13T02:10:42.849839895Z" level=info msg="StartContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" returns successfully" Dec 13 02:10:43.034237 kubelet[2733]: I1213 02:10:43.034088 2733 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:10:43.065041 kubelet[2733]: I1213 02:10:43.064979 2733 topology_manager.go:215] "Topology Admit Handler" podUID="eda6154b-7643-412c-87d8-7833af9a246c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6zjwt" Dec 13 02:10:43.066788 kubelet[2733]: I1213 02:10:43.066689 2733 topology_manager.go:215] "Topology Admit Handler" podUID="1b9e94a4-1670-4b92-b9ea-1a26f7805f1a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jr4k6" Dec 13 02:10:43.084815 kubelet[2733]: I1213 02:10:43.084746 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmwz\" (UniqueName: \"kubernetes.io/projected/1b9e94a4-1670-4b92-b9ea-1a26f7805f1a-kube-api-access-gkmwz\") pod \"coredns-7db6d8ff4d-jr4k6\" (UID: \"1b9e94a4-1670-4b92-b9ea-1a26f7805f1a\") " pod="kube-system/coredns-7db6d8ff4d-jr4k6" Dec 13 02:10:43.084815 kubelet[2733]: I1213 02:10:43.084796 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eda6154b-7643-412c-87d8-7833af9a246c-config-volume\") pod \"coredns-7db6d8ff4d-6zjwt\" (UID: \"eda6154b-7643-412c-87d8-7833af9a246c\") " pod="kube-system/coredns-7db6d8ff4d-6zjwt" Dec 13 02:10:43.084961 kubelet[2733]: I1213 02:10:43.084827 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqld\" (UniqueName: \"kubernetes.io/projected/eda6154b-7643-412c-87d8-7833af9a246c-kube-api-access-pjqld\") pod \"coredns-7db6d8ff4d-6zjwt\" (UID: \"eda6154b-7643-412c-87d8-7833af9a246c\") " 
pod="kube-system/coredns-7db6d8ff4d-6zjwt" Dec 13 02:10:43.084961 kubelet[2733]: I1213 02:10:43.084846 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9e94a4-1670-4b92-b9ea-1a26f7805f1a-config-volume\") pod \"coredns-7db6d8ff4d-jr4k6\" (UID: \"1b9e94a4-1670-4b92-b9ea-1a26f7805f1a\") " pod="kube-system/coredns-7db6d8ff4d-jr4k6" Dec 13 02:10:43.086426 systemd[1]: Created slice kubepods-burstable-podeda6154b_7643_412c_87d8_7833af9a246c.slice - libcontainer container kubepods-burstable-podeda6154b_7643_412c_87d8_7833af9a246c.slice. Dec 13 02:10:43.103566 systemd[1]: Created slice kubepods-burstable-pod1b9e94a4_1670_4b92_b9ea_1a26f7805f1a.slice - libcontainer container kubepods-burstable-pod1b9e94a4_1670_4b92_b9ea_1a26f7805f1a.slice. Dec 13 02:10:43.394859 containerd[1476]: time="2024-12-13T02:10:43.394736592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zjwt,Uid:eda6154b-7643-412c-87d8-7833af9a246c,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:43.416909 containerd[1476]: time="2024-12-13T02:10:43.416712911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jr4k6,Uid:1b9e94a4-1670-4b92-b9ea-1a26f7805f1a,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:43.771751 kubelet[2733]: I1213 02:10:43.771177 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c56k2" podStartSLOduration=6.703087883 podStartE2EDuration="11.771160765s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="2024-12-13 02:10:33.363769047 +0000 UTC m=+16.837791778" lastFinishedPulling="2024-12-13 02:10:38.431841889 +0000 UTC m=+21.905864660" observedRunningTime="2024-12-13 02:10:43.769331895 +0000 UTC m=+27.243354706" watchObservedRunningTime="2024-12-13 02:10:43.771160765 +0000 UTC m=+27.245183536" Dec 13 02:10:45.237498 systemd-networkd[1368]: cilium_host: Link UP 
Dec 13 02:10:45.240139 systemd-networkd[1368]: cilium_net: Link UP Dec 13 02:10:45.240469 systemd-networkd[1368]: cilium_net: Gained carrier Dec 13 02:10:45.241637 systemd-networkd[1368]: cilium_host: Gained carrier Dec 13 02:10:45.377772 systemd-networkd[1368]: cilium_vxlan: Link UP Dec 13 02:10:45.377960 systemd-networkd[1368]: cilium_vxlan: Gained carrier Dec 13 02:10:45.717580 kernel: NET: Registered PF_ALG protocol family Dec 13 02:10:45.798887 systemd-networkd[1368]: cilium_host: Gained IPv6LL Dec 13 02:10:46.182806 systemd-networkd[1368]: cilium_net: Gained IPv6LL Dec 13 02:10:46.452214 systemd-networkd[1368]: lxc_health: Link UP Dec 13 02:10:46.460107 systemd-networkd[1368]: lxc_health: Gained carrier Dec 13 02:10:47.008944 systemd-networkd[1368]: lxcd80867e674cf: Link UP Dec 13 02:10:47.022514 kernel: eth0: renamed from tmp3f9d6 Dec 13 02:10:47.027585 kernel: eth0: renamed from tmpa6789 Dec 13 02:10:47.035109 systemd-networkd[1368]: lxc3bf43b73b687: Link UP Dec 13 02:10:47.037486 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Dec 13 02:10:47.039519 systemd-networkd[1368]: lxc3bf43b73b687: Gained carrier Dec 13 02:10:47.039733 systemd-networkd[1368]: lxcd80867e674cf: Gained carrier Dec 13 02:10:47.655058 systemd-networkd[1368]: lxc_health: Gained IPv6LL Dec 13 02:10:48.102913 systemd-networkd[1368]: lxc3bf43b73b687: Gained IPv6LL Dec 13 02:10:48.486814 systemd-networkd[1368]: lxcd80867e674cf: Gained IPv6LL Dec 13 02:10:50.791323 containerd[1476]: time="2024-12-13T02:10:50.790584906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:50.791323 containerd[1476]: time="2024-12-13T02:10:50.791291061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:50.791883 containerd[1476]: time="2024-12-13T02:10:50.791328503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:50.791883 containerd[1476]: time="2024-12-13T02:10:50.791508232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:50.822778 systemd[1]: Started cri-containerd-3f9d69f0f1f2387aa95435a7dc2491c03c9d290ebdd5c9f2f97d9b1644cdc56d.scope - libcontainer container 3f9d69f0f1f2387aa95435a7dc2491c03c9d290ebdd5c9f2f97d9b1644cdc56d. Dec 13 02:10:50.828286 containerd[1476]: time="2024-12-13T02:10:50.828045030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:50.828286 containerd[1476]: time="2024-12-13T02:10:50.828108513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:50.828286 containerd[1476]: time="2024-12-13T02:10:50.828128594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:50.829564 containerd[1476]: time="2024-12-13T02:10:50.829457219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:50.864241 systemd[1]: Started cri-containerd-a6789622b675f49293b2cf7216179bf637df71fbf2fbd2eab23175e1f4d71da0.scope - libcontainer container a6789622b675f49293b2cf7216179bf637df71fbf2fbd2eab23175e1f4d71da0. 
Dec 13 02:10:50.883412 containerd[1476]: time="2024-12-13T02:10:50.883369312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jr4k6,Uid:1b9e94a4-1670-4b92-b9ea-1a26f7805f1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f9d69f0f1f2387aa95435a7dc2491c03c9d290ebdd5c9f2f97d9b1644cdc56d\"" Dec 13 02:10:50.888875 containerd[1476]: time="2024-12-13T02:10:50.888751456Z" level=info msg="CreateContainer within sandbox \"3f9d69f0f1f2387aa95435a7dc2491c03c9d290ebdd5c9f2f97d9b1644cdc56d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:10:50.917420 containerd[1476]: time="2024-12-13T02:10:50.917350744Z" level=info msg="CreateContainer within sandbox \"3f9d69f0f1f2387aa95435a7dc2491c03c9d290ebdd5c9f2f97d9b1644cdc56d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0abfc84cb67d95263af33edbf59d087953da657df6bc6911b43ffbc8139843f2\"" Dec 13 02:10:50.918802 containerd[1476]: time="2024-12-13T02:10:50.918773094Z" level=info msg="StartContainer for \"0abfc84cb67d95263af33edbf59d087953da657df6bc6911b43ffbc8139843f2\"" Dec 13 02:10:50.944783 containerd[1476]: time="2024-12-13T02:10:50.944388354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zjwt,Uid:eda6154b-7643-412c-87d8-7833af9a246c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6789622b675f49293b2cf7216179bf637df71fbf2fbd2eab23175e1f4d71da0\"" Dec 13 02:10:50.949660 containerd[1476]: time="2024-12-13T02:10:50.949348838Z" level=info msg="CreateContainer within sandbox \"a6789622b675f49293b2cf7216179bf637df71fbf2fbd2eab23175e1f4d71da0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:10:50.976033 containerd[1476]: time="2024-12-13T02:10:50.974776569Z" level=info msg="CreateContainer within sandbox \"a6789622b675f49293b2cf7216179bf637df71fbf2fbd2eab23175e1f4d71da0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"bb0beff4bf0c63121412c55cd1254e7f9ce67e825b7ff40895b9025a49481aed\"" Dec 13 02:10:50.975061 systemd[1]: Started cri-containerd-0abfc84cb67d95263af33edbf59d087953da657df6bc6911b43ffbc8139843f2.scope - libcontainer container 0abfc84cb67d95263af33edbf59d087953da657df6bc6911b43ffbc8139843f2. Dec 13 02:10:50.977549 containerd[1476]: time="2024-12-13T02:10:50.977359336Z" level=info msg="StartContainer for \"bb0beff4bf0c63121412c55cd1254e7f9ce67e825b7ff40895b9025a49481aed\"" Dec 13 02:10:51.014747 systemd[1]: Started cri-containerd-bb0beff4bf0c63121412c55cd1254e7f9ce67e825b7ff40895b9025a49481aed.scope - libcontainer container bb0beff4bf0c63121412c55cd1254e7f9ce67e825b7ff40895b9025a49481aed. Dec 13 02:10:51.030508 containerd[1476]: time="2024-12-13T02:10:51.030390345Z" level=info msg="StartContainer for \"0abfc84cb67d95263af33edbf59d087953da657df6bc6911b43ffbc8139843f2\" returns successfully" Dec 13 02:10:51.065686 containerd[1476]: time="2024-12-13T02:10:51.063257408Z" level=info msg="StartContainer for \"bb0beff4bf0c63121412c55cd1254e7f9ce67e825b7ff40895b9025a49481aed\" returns successfully" Dec 13 02:10:51.795704 kubelet[2733]: I1213 02:10:51.795339 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jr4k6" podStartSLOduration=18.79530848 podStartE2EDuration="18.79530848s" podCreationTimestamp="2024-12-13 02:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:51.792996443 +0000 UTC m=+35.267019214" watchObservedRunningTime="2024-12-13 02:10:51.79530848 +0000 UTC m=+35.269331251" Dec 13 02:10:54.360083 kubelet[2733]: I1213 02:10:54.359779 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:54.385362 kubelet[2733]: I1213 02:10:54.385279 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6zjwt" 
podStartSLOduration=21.385181328 podStartE2EDuration="21.385181328s" podCreationTimestamp="2024-12-13 02:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:51.835160656 +0000 UTC m=+35.309183427" watchObservedRunningTime="2024-12-13 02:10:54.385181328 +0000 UTC m=+37.859204099" Dec 13 02:15:06.786194 systemd[1]: Started sshd@7-78.47.218.196:22-147.75.109.163:37244.service - OpenSSH per-connection server daemon (147.75.109.163:37244). Dec 13 02:15:07.777364 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 37244 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:07.780890 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:07.789160 systemd-logind[1452]: New session 8 of user core. Dec 13 02:15:07.793961 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:15:08.550027 sshd[4139]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:08.555206 systemd[1]: sshd@7-78.47.218.196:22-147.75.109.163:37244.service: Deactivated successfully. Dec 13 02:15:08.558042 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:15:08.559184 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:15:08.560403 systemd-logind[1452]: Removed session 8. Dec 13 02:15:13.720642 systemd[1]: Started sshd@8-78.47.218.196:22-147.75.109.163:37250.service - OpenSSH per-connection server daemon (147.75.109.163:37250). Dec 13 02:15:14.720031 sshd[4153]: Accepted publickey for core from 147.75.109.163 port 37250 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:14.722325 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:14.728473 systemd-logind[1452]: New session 9 of user core. Dec 13 02:15:14.735739 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 02:15:15.475946 sshd[4153]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:15.481845 systemd[1]: sshd@8-78.47.218.196:22-147.75.109.163:37250.service: Deactivated successfully. Dec 13 02:15:15.486032 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:15:15.487031 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:15:15.489835 systemd-logind[1452]: Removed session 9. Dec 13 02:15:20.651888 systemd[1]: Started sshd@9-78.47.218.196:22-147.75.109.163:39180.service - OpenSSH per-connection server daemon (147.75.109.163:39180). Dec 13 02:15:21.622674 sshd[4169]: Accepted publickey for core from 147.75.109.163 port 39180 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:21.625304 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:21.631594 systemd-logind[1452]: New session 10 of user core. Dec 13 02:15:21.635756 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 02:15:22.377682 sshd[4169]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:22.383772 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:15:22.384358 systemd[1]: sshd@9-78.47.218.196:22-147.75.109.163:39180.service: Deactivated successfully. Dec 13 02:15:22.387308 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:15:22.388916 systemd-logind[1452]: Removed session 10. Dec 13 02:15:27.550117 systemd[1]: Started sshd@10-78.47.218.196:22-147.75.109.163:38274.service - OpenSSH per-connection server daemon (147.75.109.163:38274). Dec 13 02:15:28.528769 sshd[4183]: Accepted publickey for core from 147.75.109.163 port 38274 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:28.530899 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:28.536978 systemd-logind[1452]: New session 11 of user core. 
Dec 13 02:15:28.546779 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:15:29.274770 sshd[4183]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:29.280478 systemd[1]: sshd@10-78.47.218.196:22-147.75.109.163:38274.service: Deactivated successfully. Dec 13 02:15:29.282868 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:15:29.283935 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:15:29.285355 systemd-logind[1452]: Removed session 11. Dec 13 02:15:29.449854 systemd[1]: Started sshd@11-78.47.218.196:22-147.75.109.163:38290.service - OpenSSH per-connection server daemon (147.75.109.163:38290). Dec 13 02:15:30.450917 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 38290 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:30.452798 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:30.459977 systemd-logind[1452]: New session 12 of user core. Dec 13 02:15:30.467803 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 02:15:31.247663 sshd[4196]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:31.254014 systemd[1]: sshd@11-78.47.218.196:22-147.75.109.163:38290.service: Deactivated successfully. Dec 13 02:15:31.257809 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:15:31.260577 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:15:31.262210 systemd-logind[1452]: Removed session 12. Dec 13 02:15:31.427213 systemd[1]: Started sshd@12-78.47.218.196:22-147.75.109.163:38292.service - OpenSSH per-connection server daemon (147.75.109.163:38292). 
Dec 13 02:15:32.421446 sshd[4207]: Accepted publickey for core from 147.75.109.163 port 38292 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:32.424316 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:32.432397 systemd-logind[1452]: New session 13 of user core. Dec 13 02:15:32.436782 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 02:15:33.180987 sshd[4207]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:33.187970 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:15:33.188933 systemd[1]: sshd@12-78.47.218.196:22-147.75.109.163:38292.service: Deactivated successfully. Dec 13 02:15:33.191458 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:15:33.193028 systemd-logind[1452]: Removed session 13. Dec 13 02:15:38.359896 systemd[1]: Started sshd@13-78.47.218.196:22-147.75.109.163:54496.service - OpenSSH per-connection server daemon (147.75.109.163:54496). Dec 13 02:15:39.350904 sshd[4222]: Accepted publickey for core from 147.75.109.163 port 54496 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:39.353394 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:39.361562 systemd-logind[1452]: New session 14 of user core. Dec 13 02:15:39.370828 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 02:15:40.116158 sshd[4222]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:40.122037 systemd[1]: sshd@13-78.47.218.196:22-147.75.109.163:54496.service: Deactivated successfully. Dec 13 02:15:40.127487 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:15:40.129135 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:15:40.130521 systemd-logind[1452]: Removed session 14. 
Dec 13 02:15:40.288940 systemd[1]: Started sshd@14-78.47.218.196:22-147.75.109.163:54498.service - OpenSSH per-connection server daemon (147.75.109.163:54498). Dec 13 02:15:41.272507 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 54498 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:41.274754 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:41.280414 systemd-logind[1452]: New session 15 of user core. Dec 13 02:15:41.285852 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 02:15:42.093846 sshd[4235]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:42.099449 systemd[1]: sshd@14-78.47.218.196:22-147.75.109.163:54498.service: Deactivated successfully. Dec 13 02:15:42.102692 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:15:42.103857 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:15:42.104803 systemd-logind[1452]: Removed session 15. Dec 13 02:15:42.273009 systemd[1]: Started sshd@15-78.47.218.196:22-147.75.109.163:54510.service - OpenSSH per-connection server daemon (147.75.109.163:54510). Dec 13 02:15:43.251984 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 54510 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:43.254626 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:43.261489 systemd-logind[1452]: New session 16 of user core. Dec 13 02:15:43.267761 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 02:15:45.588359 sshd[4246]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:45.594096 systemd[1]: sshd@15-78.47.218.196:22-147.75.109.163:54510.service: Deactivated successfully. Dec 13 02:15:45.598244 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:15:45.599820 systemd-logind[1452]: Session 16 logged out. 
Waiting for processes to exit. Dec 13 02:15:45.601038 systemd-logind[1452]: Removed session 16. Dec 13 02:15:45.763092 systemd[1]: Started sshd@16-78.47.218.196:22-147.75.109.163:54526.service - OpenSSH per-connection server daemon (147.75.109.163:54526). Dec 13 02:15:46.739063 sshd[4264]: Accepted publickey for core from 147.75.109.163 port 54526 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:46.741954 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:46.750637 systemd-logind[1452]: New session 17 of user core. Dec 13 02:15:46.754803 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 02:15:47.616927 sshd[4264]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:47.623797 systemd[1]: sshd@16-78.47.218.196:22-147.75.109.163:54526.service: Deactivated successfully. Dec 13 02:15:47.626800 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:15:47.629024 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:15:47.630355 systemd-logind[1452]: Removed session 17. Dec 13 02:15:47.793010 systemd[1]: Started sshd@17-78.47.218.196:22-147.75.109.163:35570.service - OpenSSH per-connection server daemon (147.75.109.163:35570). Dec 13 02:15:48.780566 sshd[4275]: Accepted publickey for core from 147.75.109.163 port 35570 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:48.783228 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:48.789094 systemd-logind[1452]: New session 18 of user core. Dec 13 02:15:48.794724 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 02:15:49.537890 sshd[4275]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:49.542888 systemd[1]: sshd@17-78.47.218.196:22-147.75.109.163:35570.service: Deactivated successfully. 
Dec 13 02:15:49.545927 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:15:49.548961 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:15:49.550443 systemd-logind[1452]: Removed session 18. Dec 13 02:15:54.706859 systemd[1]: Started sshd@18-78.47.218.196:22-147.75.109.163:35584.service - OpenSSH per-connection server daemon (147.75.109.163:35584). Dec 13 02:15:55.702610 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 35584 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:55.705180 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:55.714930 systemd-logind[1452]: New session 19 of user core. Dec 13 02:15:55.721763 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 02:15:56.455732 sshd[4292]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:56.461896 systemd[1]: sshd@18-78.47.218.196:22-147.75.109.163:35584.service: Deactivated successfully. Dec 13 02:15:56.461982 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:15:56.466403 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:15:56.468340 systemd-logind[1452]: Removed session 19. Dec 13 02:16:01.636478 systemd[1]: Started sshd@19-78.47.218.196:22-147.75.109.163:50480.service - OpenSSH per-connection server daemon (147.75.109.163:50480). Dec 13 02:16:02.620951 sshd[4306]: Accepted publickey for core from 147.75.109.163 port 50480 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:16:02.623834 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:16:02.631287 systemd-logind[1452]: New session 20 of user core. Dec 13 02:16:02.633838 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 02:16:03.390378 sshd[4306]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:03.396509 systemd[1]: sshd@19-78.47.218.196:22-147.75.109.163:50480.service: Deactivated successfully. Dec 13 02:16:03.399886 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:16:03.402110 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:16:03.403467 systemd-logind[1452]: Removed session 20. Dec 13 02:16:03.563842 systemd[1]: Started sshd@20-78.47.218.196:22-147.75.109.163:50490.service - OpenSSH per-connection server daemon (147.75.109.163:50490). Dec 13 02:16:04.538112 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 50490 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:16:04.540279 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:16:04.546174 systemd-logind[1452]: New session 21 of user core. Dec 13 02:16:04.555753 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 02:16:07.129957 systemd[1]: run-containerd-runc-k8s.io-b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c-runc.G325Aw.mount: Deactivated successfully. 
Dec 13 02:16:07.132629 containerd[1476]: time="2024-12-13T02:16:07.131192768Z" level=info msg="StopContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" with timeout 30 (s)" Dec 13 02:16:07.133981 containerd[1476]: time="2024-12-13T02:16:07.133632569Z" level=info msg="Stop container \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" with signal terminated" Dec 13 02:16:07.153194 containerd[1476]: time="2024-12-13T02:16:07.153138056Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:16:07.154006 systemd[1]: cri-containerd-2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d.scope: Deactivated successfully. Dec 13 02:16:07.166523 containerd[1476]: time="2024-12-13T02:16:07.166486617Z" level=info msg="StopContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" with timeout 2 (s)" Dec 13 02:16:07.166959 containerd[1476]: time="2024-12-13T02:16:07.166902564Z" level=info msg="Stop container \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" with signal terminated" Dec 13 02:16:07.175796 systemd-networkd[1368]: lxc_health: Link DOWN Dec 13 02:16:07.175812 systemd-networkd[1368]: lxc_health: Lost carrier Dec 13 02:16:07.194028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d-rootfs.mount: Deactivated successfully. Dec 13 02:16:07.195534 systemd[1]: cri-containerd-b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c.scope: Deactivated successfully. Dec 13 02:16:07.197613 systemd[1]: cri-containerd-b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c.scope: Consumed 7.832s CPU time. 
Dec 13 02:16:07.209679 containerd[1476]: time="2024-12-13T02:16:07.209349885Z" level=info msg="shim disconnected" id=2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d namespace=k8s.io Dec 13 02:16:07.209679 containerd[1476]: time="2024-12-13T02:16:07.209623263Z" level=warning msg="cleaning up after shim disconnected" id=2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d namespace=k8s.io Dec 13 02:16:07.210259 containerd[1476]: time="2024-12-13T02:16:07.209766992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:07.224331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c-rootfs.mount: Deactivated successfully. Dec 13 02:16:07.228285 containerd[1476]: time="2024-12-13T02:16:07.228214609Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 02:16:07.230196 containerd[1476]: time="2024-12-13T02:16:07.230035009Z" level=info msg="shim disconnected" id=b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c namespace=k8s.io Dec 13 02:16:07.230196 containerd[1476]: time="2024-12-13T02:16:07.230081332Z" level=warning msg="cleaning up after shim disconnected" id=b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c namespace=k8s.io Dec 13 02:16:07.230196 containerd[1476]: time="2024-12-13T02:16:07.230089253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:07.232160 containerd[1476]: time="2024-12-13T02:16:07.232005459Z" level=info msg="StopContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" returns successfully" Dec 13 02:16:07.235186 containerd[1476]: time="2024-12-13T02:16:07.235055700Z" level=info msg="StopPodSandbox for 
\"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\"" Dec 13 02:16:07.235186 containerd[1476]: time="2024-12-13T02:16:07.235097303Z" level=info msg="Container to stop \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.240921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134-shm.mount: Deactivated successfully. Dec 13 02:16:07.253984 systemd[1]: cri-containerd-754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134.scope: Deactivated successfully. Dec 13 02:16:07.259742 containerd[1476]: time="2024-12-13T02:16:07.259653203Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 02:16:07.263208 containerd[1476]: time="2024-12-13T02:16:07.262848654Z" level=info msg="StopContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" returns successfully" Dec 13 02:16:07.265517 containerd[1476]: time="2024-12-13T02:16:07.265487628Z" level=info msg="StopPodSandbox for \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\"" Dec 13 02:16:07.266355 containerd[1476]: time="2024-12-13T02:16:07.266188474Z" level=info msg="Container to stop \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.266355 containerd[1476]: time="2024-12-13T02:16:07.266239358Z" level=info msg="Container to stop \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.266355 containerd[1476]: time="2024-12-13T02:16:07.266252038Z" level=info msg="Container to stop 
\"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.266355 containerd[1476]: time="2024-12-13T02:16:07.266262479Z" level=info msg="Container to stop \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.266355 containerd[1476]: time="2024-12-13T02:16:07.266272200Z" level=info msg="Container to stop \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:07.276587 systemd[1]: cri-containerd-89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48.scope: Deactivated successfully. Dec 13 02:16:07.297439 containerd[1476]: time="2024-12-13T02:16:07.296857657Z" level=info msg="shim disconnected" id=754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134 namespace=k8s.io Dec 13 02:16:07.297439 containerd[1476]: time="2024-12-13T02:16:07.297104514Z" level=warning msg="cleaning up after shim disconnected" id=754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134 namespace=k8s.io Dec 13 02:16:07.297439 containerd[1476]: time="2024-12-13T02:16:07.297114034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:07.310524 containerd[1476]: time="2024-12-13T02:16:07.310333346Z" level=info msg="shim disconnected" id=89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48 namespace=k8s.io Dec 13 02:16:07.310524 containerd[1476]: time="2024-12-13T02:16:07.310387470Z" level=warning msg="cleaning up after shim disconnected" id=89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48 namespace=k8s.io Dec 13 02:16:07.310524 containerd[1476]: time="2024-12-13T02:16:07.310395671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:07.317524 containerd[1476]: time="2024-12-13T02:16:07.317025748Z" level=info 
msg="TearDown network for sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" successfully" Dec 13 02:16:07.317524 containerd[1476]: time="2024-12-13T02:16:07.317059550Z" level=info msg="StopPodSandbox for \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" returns successfully" Dec 13 02:16:07.327665 containerd[1476]: time="2024-12-13T02:16:07.326770671Z" level=info msg="TearDown network for sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" successfully" Dec 13 02:16:07.327665 containerd[1476]: time="2024-12-13T02:16:07.326805753Z" level=info msg="StopPodSandbox for \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" returns successfully" Dec 13 02:16:07.478688 kubelet[2733]: I1213 02:16:07.478469 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-cilium-config-path\") pod \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\" (UID: \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\") " Dec 13 02:16:07.478688 kubelet[2733]: I1213 02:16:07.478592 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2nb7\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-kube-api-access-t2nb7\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.478688 kubelet[2733]: I1213 02:16:07.478631 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cni-path\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.478688 kubelet[2733]: I1213 02:16:07.478664 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-kernel\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 02:16:07.478702 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gnn4\" (UniqueName: \"kubernetes.io/projected/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-kube-api-access-7gnn4\") pod \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\" (UID: \"1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 02:16:07.478738 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-hubble-tls\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 02:16:07.478773 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-xtables-lock\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 02:16:07.478819 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-net\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 02:16:07.478860 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbc78614-2e91-4c8a-a962-739f02408941-clustermesh-secrets\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479423 kubelet[2733]: I1213 
02:16:07.478891 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-lib-modules\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.478926 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-cgroup\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.478958 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-run\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.478988 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-hostproc\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.479020 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-etc-cni-netd\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.479056 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbc78614-2e91-4c8a-a962-739f02408941-cilium-config-path\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: 
\"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.479834 kubelet[2733]: I1213 02:16:07.479089 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-bpf-maps\") pod \"bbc78614-2e91-4c8a-a962-739f02408941\" (UID: \"bbc78614-2e91-4c8a-a962-739f02408941\") " Dec 13 02:16:07.480130 kubelet[2733]: I1213 02:16:07.479189 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.480130 kubelet[2733]: I1213 02:16:07.480073 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.487278 kubelet[2733]: I1213 02:16:07.487190 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-kube-api-access-t2nb7" (OuterVolumeSpecName: "kube-api-access-t2nb7") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "kube-api-access-t2nb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:16:07.487414 kubelet[2733]: I1213 02:16:07.487295 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cni-path" (OuterVolumeSpecName: "cni-path") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.487414 kubelet[2733]: I1213 02:16:07.487331 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.489580 kubelet[2733]: I1213 02:16:07.488321 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" (UID: "1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:16:07.490388 kubelet[2733]: I1213 02:16:07.490318 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-kube-api-access-7gnn4" (OuterVolumeSpecName: "kube-api-access-7gnn4") pod "1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" (UID: "1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112"). InnerVolumeSpecName "kube-api-access-7gnn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:16:07.493563 kubelet[2733]: I1213 02:16:07.492993 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc78614-2e91-4c8a-a962-739f02408941-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:16:07.493563 kubelet[2733]: I1213 02:16:07.493055 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.493563 kubelet[2733]: I1213 02:16:07.493072 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.493563 kubelet[2733]: I1213 02:16:07.493087 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.493563 kubelet[2733]: I1213 02:16:07.493102 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-hostproc" (OuterVolumeSpecName: "hostproc") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.493790 kubelet[2733]: I1213 02:16:07.493118 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.494473 kubelet[2733]: I1213 02:16:07.494443 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:16:07.494700 kubelet[2733]: I1213 02:16:07.494680 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:07.495510 kubelet[2733]: I1213 02:16:07.495464 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc78614-2e91-4c8a-a962-739f02408941-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bbc78614-2e91-4c8a-a962-739f02408941" (UID: "bbc78614-2e91-4c8a-a962-739f02408941"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:16:07.563690 kubelet[2733]: I1213 02:16:07.563650 2733 scope.go:117] "RemoveContainer" containerID="2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d" Dec 13 02:16:07.569902 containerd[1476]: time="2024-12-13T02:16:07.569105778Z" level=info msg="RemoveContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\"" Dec 13 02:16:07.573964 systemd[1]: Removed slice kubepods-besteffort-pod1a8ad8c5_f57a_4b24_90c4_d5c26f9c4112.slice - libcontainer container kubepods-besteffort-pod1a8ad8c5_f57a_4b24_90c4_d5c26f9c4112.slice. 
Dec 13 02:16:07.579282 containerd[1476]: time="2024-12-13T02:16:07.579074355Z" level=info msg="RemoveContainer for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" returns successfully" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579224 2733 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t2nb7\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-kube-api-access-t2nb7\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579243 2733 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7gnn4\" (UniqueName: \"kubernetes.io/projected/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-kube-api-access-7gnn4\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579252 2733 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cni-path\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579261 2733 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-kernel\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579271 2733 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-xtables-lock\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579280 2733 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbc78614-2e91-4c8a-a962-739f02408941-hubble-tls\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579288 
2733 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbc78614-2e91-4c8a-a962-739f02408941-clustermesh-secrets\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.579458 kubelet[2733]: I1213 02:16:07.579297 2733 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-lib-modules\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579304 2733 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-host-proc-sys-net\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579311 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-run\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579319 2733 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-hostproc\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579326 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-cilium-cgroup\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579334 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbc78614-2e91-4c8a-a962-739f02408941-cilium-config-path\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579341 2733 
reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-bpf-maps\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579348 2733 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbc78614-2e91-4c8a-a962-739f02408941-etc-cni-netd\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580015 kubelet[2733]: I1213 02:16:07.579356 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112-cilium-config-path\") on node \"ci-4081-2-1-f-bc189a5809\" DevicePath \"\"" Dec 13 02:16:07.580738 kubelet[2733]: I1213 02:16:07.580712 2733 scope.go:117] "RemoveContainer" containerID="2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d" Dec 13 02:16:07.581177 containerd[1476]: time="2024-12-13T02:16:07.580998642Z" level=error msg="ContainerStatus for \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\": not found" Dec 13 02:16:07.581682 kubelet[2733]: E1213 02:16:07.581649 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\": not found" containerID="2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d" Dec 13 02:16:07.581756 kubelet[2733]: I1213 02:16:07.581686 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d"} err="failed to get container status 
\"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2218c9d2b89f087a41d689bc45659f5a16156d24e8c98d62c57b2621e803916d\": not found" Dec 13 02:16:07.581756 kubelet[2733]: I1213 02:16:07.581750 2733 scope.go:117] "RemoveContainer" containerID="b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c" Dec 13 02:16:07.584099 systemd[1]: Removed slice kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice - libcontainer container kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice. Dec 13 02:16:07.584214 systemd[1]: kubepods-burstable-podbbc78614_2e91_4c8a_a962_739f02408941.slice: Consumed 7.966s CPU time. Dec 13 02:16:07.586760 containerd[1476]: time="2024-12-13T02:16:07.586400639Z" level=info msg="RemoveContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\"" Dec 13 02:16:07.590492 containerd[1476]: time="2024-12-13T02:16:07.590464667Z" level=info msg="RemoveContainer for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" returns successfully" Dec 13 02:16:07.590773 kubelet[2733]: I1213 02:16:07.590753 2733 scope.go:117] "RemoveContainer" containerID="b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224" Dec 13 02:16:07.592105 containerd[1476]: time="2024-12-13T02:16:07.592080093Z" level=info msg="RemoveContainer for \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\"" Dec 13 02:16:07.597098 containerd[1476]: time="2024-12-13T02:16:07.596980377Z" level=info msg="RemoveContainer for \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\" returns successfully" Dec 13 02:16:07.597258 kubelet[2733]: I1213 02:16:07.597204 2733 scope.go:117] "RemoveContainer" containerID="7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc" Dec 13 02:16:07.600694 containerd[1476]: time="2024-12-13T02:16:07.600457046Z" level=info msg="RemoveContainer for 
\"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\"" Dec 13 02:16:07.603654 containerd[1476]: time="2024-12-13T02:16:07.603570731Z" level=info msg="RemoveContainer for \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\" returns successfully" Dec 13 02:16:07.603911 kubelet[2733]: I1213 02:16:07.603889 2733 scope.go:117] "RemoveContainer" containerID="41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5" Dec 13 02:16:07.606049 containerd[1476]: time="2024-12-13T02:16:07.605825240Z" level=info msg="RemoveContainer for \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\"" Dec 13 02:16:07.609736 containerd[1476]: time="2024-12-13T02:16:07.609708696Z" level=info msg="RemoveContainer for \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\" returns successfully" Dec 13 02:16:07.610102 kubelet[2733]: I1213 02:16:07.610075 2733 scope.go:117] "RemoveContainer" containerID="b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477" Dec 13 02:16:07.612700 containerd[1476]: time="2024-12-13T02:16:07.612600167Z" level=info msg="RemoveContainer for \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\"" Dec 13 02:16:07.615828 containerd[1476]: time="2024-12-13T02:16:07.615761536Z" level=info msg="RemoveContainer for \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\" returns successfully" Dec 13 02:16:07.616143 kubelet[2733]: I1213 02:16:07.616045 2733 scope.go:117] "RemoveContainer" containerID="b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c" Dec 13 02:16:07.616461 containerd[1476]: time="2024-12-13T02:16:07.616352815Z" level=error msg="ContainerStatus for \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\": not found" Dec 13 02:16:07.616756 kubelet[2733]: E1213 
02:16:07.616613 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\": not found" containerID="b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c" Dec 13 02:16:07.616756 kubelet[2733]: I1213 02:16:07.616643 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c"} err="failed to get container status \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2c94d152e4c72b48c2930d7dcd3ec4130564772dd82e4f63f42a6268179574c\": not found" Dec 13 02:16:07.616756 kubelet[2733]: I1213 02:16:07.616689 2733 scope.go:117] "RemoveContainer" containerID="b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224" Dec 13 02:16:07.617216 containerd[1476]: time="2024-12-13T02:16:07.617051821Z" level=error msg="ContainerStatus for \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\": not found" Dec 13 02:16:07.617432 kubelet[2733]: E1213 02:16:07.617188 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\": not found" containerID="b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224" Dec 13 02:16:07.617432 kubelet[2733]: I1213 02:16:07.617335 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224"} err="failed to get 
container status \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0945eab2cf3257fc52eca44dd9afcf6a5b4da08e0e45434466eb72bdc2d5224\": not found" Dec 13 02:16:07.617432 kubelet[2733]: I1213 02:16:07.617355 2733 scope.go:117] "RemoveContainer" containerID="7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc" Dec 13 02:16:07.617996 containerd[1476]: time="2024-12-13T02:16:07.617748267Z" level=error msg="ContainerStatus for \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\": not found" Dec 13 02:16:07.618083 kubelet[2733]: E1213 02:16:07.617871 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\": not found" containerID="7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc" Dec 13 02:16:07.618083 kubelet[2733]: I1213 02:16:07.617893 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc"} err="failed to get container status \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cf1934aef71339ffde204bc28c2e76ee7649ab11ba2a8da05534d0c9edd10bc\": not found" Dec 13 02:16:07.618083 kubelet[2733]: I1213 02:16:07.617908 2733 scope.go:117] "RemoveContainer" containerID="41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5" Dec 13 02:16:07.618161 containerd[1476]: time="2024-12-13T02:16:07.618087929Z" level=error msg="ContainerStatus for 
\"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\": not found" Dec 13 02:16:07.618468 kubelet[2733]: E1213 02:16:07.618282 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\": not found" containerID="41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5" Dec 13 02:16:07.618468 kubelet[2733]: I1213 02:16:07.618342 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5"} err="failed to get container status \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"41b06b0b34226b00450f02c30869afd9dba42be42cff63e496d9d114b31528f5\": not found" Dec 13 02:16:07.618468 kubelet[2733]: I1213 02:16:07.618366 2733 scope.go:117] "RemoveContainer" containerID="b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477" Dec 13 02:16:07.618759 kubelet[2733]: E1213 02:16:07.618712 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\": not found" containerID="b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477" Dec 13 02:16:07.618759 kubelet[2733]: I1213 02:16:07.618749 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477"} err="failed to get container status 
\"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\": rpc error: code = NotFound desc = an error occurred when try to find container \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\": not found" Dec 13 02:16:07.618820 containerd[1476]: time="2024-12-13T02:16:07.618524038Z" level=error msg="ContainerStatus for \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b68dc8cc8c6ed3ecb776f7bb45a0466929f42abb0d9edd27a7d6264132913477\": not found" Dec 13 02:16:08.124283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134-rootfs.mount: Deactivated successfully. Dec 13 02:16:08.124594 systemd[1]: var-lib-kubelet-pods-1a8ad8c5\x2df57a\x2d4b24\x2d90c4\x2dd5c26f9c4112-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gnn4.mount: Deactivated successfully. Dec 13 02:16:08.124791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48-rootfs.mount: Deactivated successfully. Dec 13 02:16:08.124966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48-shm.mount: Deactivated successfully. Dec 13 02:16:08.125119 systemd[1]: var-lib-kubelet-pods-bbc78614\x2d2e91\x2d4c8a\x2da962\x2d739f02408941-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2nb7.mount: Deactivated successfully. Dec 13 02:16:08.125281 systemd[1]: var-lib-kubelet-pods-bbc78614\x2d2e91\x2d4c8a\x2da962\x2d739f02408941-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:16:08.125429 systemd[1]: var-lib-kubelet-pods-bbc78614\x2d2e91\x2d4c8a\x2da962\x2d739f02408941-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:16:08.616581 kubelet[2733]: I1213 02:16:08.615762 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" path="/var/lib/kubelet/pods/1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112/volumes"
Dec 13 02:16:08.617149 kubelet[2733]: I1213 02:16:08.616524 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc78614-2e91-4c8a-a962-739f02408941" path="/var/lib/kubelet/pods/bbc78614-2e91-4c8a-a962-739f02408941/volumes"
Dec 13 02:16:09.212280 sshd[4319]: pam_unix(sshd:session): session closed for user core
Dec 13 02:16:09.217611 systemd[1]: sshd@20-78.47.218.196:22-147.75.109.163:50490.service: Deactivated successfully.
Dec 13 02:16:09.220929 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:16:09.221362 systemd[1]: session-21.scope: Consumed 1.419s CPU time.
Dec 13 02:16:09.222318 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:16:09.223517 systemd-logind[1452]: Removed session 21.
Dec 13 02:16:09.386886 systemd[1]: Started sshd@21-78.47.218.196:22-147.75.109.163:40738.service - OpenSSH per-connection server daemon (147.75.109.163:40738).
Dec 13 02:16:10.384689 sshd[4484]: Accepted publickey for core from 147.75.109.163 port 40738 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:16:10.386696 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:16:10.394086 systemd-logind[1452]: New session 22 of user core.
Dec 13 02:16:10.400772 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 02:16:11.623289 kubelet[2733]: I1213 02:16:11.623215 2733 topology_manager.go:215] "Topology Admit Handler" podUID="b69c6dd4-fc88-45ff-a4a2-bbd96419f942" podNamespace="kube-system" podName="cilium-fh5p4"
Dec 13 02:16:11.623289 kubelet[2733]: E1213 02:16:11.623298 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="mount-bpf-fs"
Dec 13 02:16:11.623727 kubelet[2733]: E1213 02:16:11.623308 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" containerName="cilium-operator"
Dec 13 02:16:11.623727 kubelet[2733]: E1213 02:16:11.623316 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="clean-cilium-state"
Dec 13 02:16:11.623727 kubelet[2733]: E1213 02:16:11.623323 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="mount-cgroup"
Dec 13 02:16:11.623727 kubelet[2733]: E1213 02:16:11.623328 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="apply-sysctl-overwrites"
Dec 13 02:16:11.623727 kubelet[2733]: E1213 02:16:11.623337 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="cilium-agent"
Dec 13 02:16:11.623727 kubelet[2733]: I1213 02:16:11.623358 2733 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc78614-2e91-4c8a-a962-739f02408941" containerName="cilium-agent"
Dec 13 02:16:11.623727 kubelet[2733]: I1213 02:16:11.623364 2733 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8ad8c5-f57a-4b24-90c4-d5c26f9c4112" containerName="cilium-operator"
Dec 13 02:16:11.633747 systemd[1]: Created slice kubepods-burstable-podb69c6dd4_fc88_45ff_a4a2_bbd96419f942.slice - libcontainer container kubepods-burstable-podb69c6dd4_fc88_45ff_a4a2_bbd96419f942.slice.
Dec 13 02:16:11.703092 kubelet[2733]: I1213 02:16:11.703029 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-cni-path\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703092 kubelet[2733]: I1213 02:16:11.703079 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-etc-cni-netd\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703092 kubelet[2733]: I1213 02:16:11.703102 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-host-proc-sys-net\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703121 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-xtables-lock\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703143 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-host-proc-sys-kernel\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703162 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-bpf-maps\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703179 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-lib-modules\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703199 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-cilium-config-path\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703379 kubelet[2733]: I1213 02:16:11.703217 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtbc\" (UniqueName: \"kubernetes.io/projected/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-kube-api-access-6dtbc\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703237 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-cilium-run\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703255 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-cilium-cgroup\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703277 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-clustermesh-secrets\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703294 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-hubble-tls\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703313 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-hostproc\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.703703 kubelet[2733]: I1213 02:16:11.703332 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b69c6dd4-fc88-45ff-a4a2-bbd96419f942-cilium-ipsec-secrets\") pod \"cilium-fh5p4\" (UID: \"b69c6dd4-fc88-45ff-a4a2-bbd96419f942\") " pod="kube-system/cilium-fh5p4"
Dec 13 02:16:11.800987 kubelet[2733]: E1213 02:16:11.800911 2733 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:16:11.820034 sshd[4484]: pam_unix(sshd:session): session closed for user core
Dec 13 02:16:11.829874 systemd[1]: sshd@21-78.47.218.196:22-147.75.109.163:40738.service: Deactivated successfully.
Dec 13 02:16:11.832775 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:16:11.836787 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:16:11.841972 systemd-logind[1452]: Removed session 22.
Dec 13 02:16:11.938379 containerd[1476]: time="2024-12-13T02:16:11.938214784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fh5p4,Uid:b69c6dd4-fc88-45ff-a4a2-bbd96419f942,Namespace:kube-system,Attempt:0,}"
Dec 13 02:16:11.965846 containerd[1476]: time="2024-12-13T02:16:11.965301498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:16:11.965846 containerd[1476]: time="2024-12-13T02:16:11.965357901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:16:11.965846 containerd[1476]: time="2024-12-13T02:16:11.965374823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:16:11.965846 containerd[1476]: time="2024-12-13T02:16:11.965453388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:16:11.980882 systemd[1]: Started sshd@22-78.47.218.196:22-147.75.109.163:40740.service - OpenSSH per-connection server daemon (147.75.109.163:40740).
Dec 13 02:16:11.984908 systemd[1]: Started cri-containerd-1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a.scope - libcontainer container 1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a.
Dec 13 02:16:12.013309 containerd[1476]: time="2024-12-13T02:16:12.013271276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fh5p4,Uid:b69c6dd4-fc88-45ff-a4a2-bbd96419f942,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\""
Dec 13 02:16:12.017715 containerd[1476]: time="2024-12-13T02:16:12.017630485Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:16:12.033581 containerd[1476]: time="2024-12-13T02:16:12.033495577Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca\""
Dec 13 02:16:12.034907 containerd[1476]: time="2024-12-13T02:16:12.034853987Z" level=info msg="StartContainer for \"30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca\""
Dec 13 02:16:12.069901 systemd[1]: Started cri-containerd-30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca.scope - libcontainer container 30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca.
Dec 13 02:16:12.103016 containerd[1476]: time="2024-12-13T02:16:12.102493791Z" level=info msg="StartContainer for \"30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca\" returns successfully"
Dec 13 02:16:12.189832 systemd[1]: cri-containerd-30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca.scope: Deactivated successfully.
Dec 13 02:16:12.226515 containerd[1476]: time="2024-12-13T02:16:12.226435529Z" level=info msg="shim disconnected" id=30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca namespace=k8s.io
Dec 13 02:16:12.226515 containerd[1476]: time="2024-12-13T02:16:12.226509574Z" level=warning msg="cleaning up after shim disconnected" id=30f3b49796246b0fc9e1916b8d19e4aeee32397a4096b36608b0b63425f213ca namespace=k8s.io
Dec 13 02:16:12.226515 containerd[1476]: time="2024-12-13T02:16:12.226518854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:16:12.601497 containerd[1476]: time="2024-12-13T02:16:12.601255620Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:16:12.613580 containerd[1476]: time="2024-12-13T02:16:12.613063322Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58\""
Dec 13 02:16:12.614392 containerd[1476]: time="2024-12-13T02:16:12.614360008Z" level=info msg="StartContainer for \"fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58\""
Dec 13 02:16:12.640749 systemd[1]: Started cri-containerd-fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58.scope - libcontainer container fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58.
Dec 13 02:16:12.665043 containerd[1476]: time="2024-12-13T02:16:12.664711707Z" level=info msg="StartContainer for \"fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58\" returns successfully"
Dec 13 02:16:12.693878 systemd[1]: cri-containerd-fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58.scope: Deactivated successfully.
Dec 13 02:16:12.734829 containerd[1476]: time="2024-12-13T02:16:12.734735629Z" level=info msg="shim disconnected" id=fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58 namespace=k8s.io
Dec 13 02:16:12.734829 containerd[1476]: time="2024-12-13T02:16:12.734790553Z" level=warning msg="cleaning up after shim disconnected" id=fd355cf57892c14a340f532b2ec4669c72120a84112fc0501f786b42bdf4fe58 namespace=k8s.io
Dec 13 02:16:12.734829 containerd[1476]: time="2024-12-13T02:16:12.734800114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:16:12.964169 sshd[4525]: Accepted publickey for core from 147.75.109.163 port 40740 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:16:12.966311 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:16:12.972696 systemd-logind[1452]: New session 23 of user core.
Dec 13 02:16:12.980819 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 02:16:13.604079 containerd[1476]: time="2024-12-13T02:16:13.603663878Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:16:13.625598 containerd[1476]: time="2024-12-13T02:16:13.625555171Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3\""
Dec 13 02:16:13.627587 containerd[1476]: time="2024-12-13T02:16:13.626829615Z" level=info msg="StartContainer for \"c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3\""
Dec 13 02:16:13.639174 sshd[4525]: pam_unix(sshd:session): session closed for user core
Dec 13 02:16:13.649726 systemd[1]: sshd@22-78.47.218.196:22-147.75.109.163:40740.service: Deactivated successfully.
Dec 13 02:16:13.655460 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:16:13.661594 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:16:13.670736 systemd[1]: Started cri-containerd-c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3.scope - libcontainer container c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3.
Dec 13 02:16:13.672093 systemd-logind[1452]: Removed session 23.
Dec 13 02:16:13.701622 containerd[1476]: time="2024-12-13T02:16:13.701409885Z" level=info msg="StartContainer for \"c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3\" returns successfully"
Dec 13 02:16:13.702981 systemd[1]: cri-containerd-c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3.scope: Deactivated successfully.
Dec 13 02:16:13.732350 containerd[1476]: time="2024-12-13T02:16:13.732202968Z" level=info msg="shim disconnected" id=c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3 namespace=k8s.io
Dec 13 02:16:13.732350 containerd[1476]: time="2024-12-13T02:16:13.732255412Z" level=warning msg="cleaning up after shim disconnected" id=c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3 namespace=k8s.io
Dec 13 02:16:13.732350 containerd[1476]: time="2024-12-13T02:16:13.732263052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:16:13.811159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4e8d2f78720961202332e8a9c08db60c183a63c9e24c424dcc67cfc2fdfbcc3-rootfs.mount: Deactivated successfully.
Dec 13 02:16:13.819019 systemd[1]: Started sshd@23-78.47.218.196:22-147.75.109.163:40750.service - OpenSSH per-connection server daemon (147.75.109.163:40750).
Dec 13 02:16:14.042689 kubelet[2733]: I1213 02:16:14.042626 2733 setters.go:580] "Node became not ready" node="ci-4081-2-1-f-bc189a5809" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:16:14Z","lastTransitionTime":"2024-12-13T02:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:16:14.609742 containerd[1476]: time="2024-12-13T02:16:14.609690719Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:16:14.610993 kubelet[2733]: E1213 02:16:14.610680 2733 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-jr4k6" podUID="1b9e94a4-1670-4b92-b9ea-1a26f7805f1a"
Dec 13 02:16:14.632844 containerd[1476]: time="2024-12-13T02:16:14.632715689Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78\""
Dec 13 02:16:14.633869 containerd[1476]: time="2024-12-13T02:16:14.633828483Z" level=info msg="StartContainer for \"379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78\""
Dec 13 02:16:14.664734 systemd[1]: Started cri-containerd-379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78.scope - libcontainer container 379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78.
Dec 13 02:16:14.692176 systemd[1]: cri-containerd-379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78.scope: Deactivated successfully.
Dec 13 02:16:14.695957 containerd[1476]: time="2024-12-13T02:16:14.695918087Z" level=info msg="StartContainer for \"379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78\" returns successfully"
Dec 13 02:16:14.718184 containerd[1476]: time="2024-12-13T02:16:14.718113801Z" level=info msg="shim disconnected" id=379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78 namespace=k8s.io
Dec 13 02:16:14.718184 containerd[1476]: time="2024-12-13T02:16:14.718179526Z" level=warning msg="cleaning up after shim disconnected" id=379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78 namespace=k8s.io
Dec 13 02:16:14.718184 containerd[1476]: time="2024-12-13T02:16:14.718188166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:16:14.806174 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 40750 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8
Dec 13 02:16:14.809368 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:16:14.812278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379d5fd7fe2c57413512545f2a9dac64b314616e1082890557bcaa029b9eff78-rootfs.mount: Deactivated successfully.
Dec 13 02:16:14.819842 systemd-logind[1452]: New session 24 of user core.
Dec 13 02:16:14.826728 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 02:16:15.618455 containerd[1476]: time="2024-12-13T02:16:15.618387801Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:16:15.632525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223442023.mount: Deactivated successfully.
Dec 13 02:16:15.645433 containerd[1476]: time="2024-12-13T02:16:15.645362954Z" level=info msg="CreateContainer within sandbox \"1b94c2add0d9c884012da9b8f91c52d62b0d7b1ea9e78005defecadbffa8524a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8\""
Dec 13 02:16:15.647270 containerd[1476]: time="2024-12-13T02:16:15.646277735Z" level=info msg="StartContainer for \"bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8\""
Dec 13 02:16:15.676747 systemd[1]: Started cri-containerd-bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8.scope - libcontainer container bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8.
Dec 13 02:16:15.713836 containerd[1476]: time="2024-12-13T02:16:15.713744621Z" level=info msg="StartContainer for \"bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8\" returns successfully"
Dec 13 02:16:16.018656 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 02:16:16.611633 kubelet[2733]: E1213 02:16:16.611237 2733 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-jr4k6" podUID="1b9e94a4-1670-4b92-b9ea-1a26f7805f1a"
Dec 13 02:16:16.635815 containerd[1476]: time="2024-12-13T02:16:16.635428418Z" level=info msg="StopPodSandbox for \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\""
Dec 13 02:16:16.635815 containerd[1476]: time="2024-12-13T02:16:16.635569828Z" level=info msg="TearDown network for sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" successfully"
Dec 13 02:16:16.635815 containerd[1476]: time="2024-12-13T02:16:16.635587149Z" level=info msg="StopPodSandbox for \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" returns successfully"
Dec 13 02:16:16.638616 containerd[1476]: time="2024-12-13T02:16:16.636780548Z" level=info msg="RemovePodSandbox for \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\""
Dec 13 02:16:16.638616 containerd[1476]: time="2024-12-13T02:16:16.636828632Z" level=info msg="Forcibly stopping sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\""
Dec 13 02:16:16.638616 containerd[1476]: time="2024-12-13T02:16:16.636909157Z" level=info msg="TearDown network for sandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" successfully"
Dec 13 02:16:16.641193 containerd[1476]: time="2024-12-13T02:16:16.641141799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 02:16:16.641512 containerd[1476]: time="2024-12-13T02:16:16.641393895Z" level=info msg="RemovePodSandbox \"754136298c91e7c0faef0956e4d621dfb27b02b73f32c5602d300c9a9dc7b134\" returns successfully"
Dec 13 02:16:16.642322 containerd[1476]: time="2024-12-13T02:16:16.642252713Z" level=info msg="StopPodSandbox for \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\""
Dec 13 02:16:16.642411 containerd[1476]: time="2024-12-13T02:16:16.642377801Z" level=info msg="TearDown network for sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" successfully"
Dec 13 02:16:16.642411 containerd[1476]: time="2024-12-13T02:16:16.642398802Z" level=info msg="StopPodSandbox for \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" returns successfully"
Dec 13 02:16:16.645438 containerd[1476]: time="2024-12-13T02:16:16.644470340Z" level=info msg="RemovePodSandbox for \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\""
Dec 13 02:16:16.645438 containerd[1476]: time="2024-12-13T02:16:16.644529744Z" level=info msg="Forcibly stopping sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\""
Dec 13 02:16:16.645438 containerd[1476]: time="2024-12-13T02:16:16.644636711Z" level=info msg="TearDown network for sandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" successfully"
Dec 13 02:16:16.650270 containerd[1476]: time="2024-12-13T02:16:16.650211682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 02:16:16.650370 containerd[1476]: time="2024-12-13T02:16:16.650316009Z" level=info msg="RemovePodSandbox \"89f0e936fa8b629b52adf126bb7ed71c94efa1713c85d344c40dfe8ab73f2f48\" returns successfully"
Dec 13 02:16:17.520473 systemd[1]: run-containerd-runc-k8s.io-bb0f60fbb94c980a143184cf5421c159ef1ef8d6271c5b8e9b33eaad0e9acfb8-runc.3IrRxB.mount: Deactivated successfully.
Dec 13 02:16:17.566898 kubelet[2733]: E1213 02:16:17.566842 2733 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38912->127.0.0.1:34659: write tcp 127.0.0.1:38912->127.0.0.1:34659: write: broken pipe
Dec 13 02:16:18.945517 systemd-networkd[1368]: lxc_health: Link UP
Dec 13 02:16:18.960194 systemd-networkd[1368]: lxc_health: Gained carrier
Dec 13 02:16:19.960055 kubelet[2733]: I1213 02:16:19.959993 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fh5p4" podStartSLOduration=8.959976041000001 podStartE2EDuration="8.959976041s" podCreationTimestamp="2024-12-13 02:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:16:16.647053192 +0000 UTC m=+360.121076043" watchObservedRunningTime="2024-12-13 02:16:19.959976041 +0000 UTC m=+363.433998812"
Dec 13 02:16:20.326757 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Dec 13 02:16:24.046518 kubelet[2733]: E1213 02:16:24.046460 2733 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38940->127.0.0.1:34659: write tcp 127.0.0.1:38940->127.0.0.1:34659: write: broken pipe
Dec 13 02:16:26.372606 sshd[4728]: pam_unix(sshd:session): session closed for user core
Dec 13 02:16:26.378451 systemd[1]: sshd@23-78.47.218.196:22-147.75.109.163:40750.service: Deactivated successfully.
Dec 13 02:16:26.381184 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:16:26.384091 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:16:26.385352 systemd-logind[1452]: Removed session 24.