Dec 13 14:19:45.725913 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:19:45.725932 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:19:45.725940 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:19:45.725946 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 14:19:45.725951 kernel: random: crng init done
Dec 13 14:19:45.725957 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:19:45.725963 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 14:19:45.725989 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:19:45.725995 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726000 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726006 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726011 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726017 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726022 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726030 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726036 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726042 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:19:45.726048 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 14:19:45.726053 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:19:45.726059 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:19:45.726065 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Dec 13 14:19:45.726070 kernel: Zone ranges:
Dec 13 14:19:45.726076 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:19:45.726083 kernel: DMA32 empty
Dec 13 14:19:45.726088 kernel: Normal empty
Dec 13 14:19:45.726094 kernel: Movable zone start for each node
Dec 13 14:19:45.726099 kernel: Early memory node ranges
Dec 13 14:19:45.726105 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 14:19:45.726111 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 14:19:45.726116 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 14:19:45.726122 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 14:19:45.726127 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 14:19:45.726133 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 14:19:45.726139 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 14:19:45.726144 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:19:45.726151 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 14:19:45.726157 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:19:45.726162 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:19:45.726168 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:19:45.726173 kernel: psci: Trusted OS migration not required
Dec 13 14:19:45.726181 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:19:45.726188 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:19:45.726195 kernel: ACPI: SRAT not present
Dec 13 14:19:45.726201 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:19:45.726208 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:19:45.726214 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 14:19:45.726220 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:19:45.726226 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:19:45.726232 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:19:45.726238 kernel: CPU features: detected: Spectre-v4
Dec 13 14:19:45.726244 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:19:45.726251 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:19:45.726257 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:19:45.726263 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:19:45.726269 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:19:45.726275 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 14:19:45.726281 kernel: Policy zone: DMA
Dec 13 14:19:45.726288 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:19:45.726295 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:19:45.726301 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:19:45.726307 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:19:45.726313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:19:45.726321 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Dec 13 14:19:45.726327 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:19:45.726333 kernel: trace event string verifier disabled
Dec 13 14:19:45.726339 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:19:45.726345 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:19:45.726351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:19:45.726358 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:19:45.726364 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:19:45.726370 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:19:45.726376 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:19:45.726382 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:19:45.726389 kernel: GICv3: 256 SPIs implemented
Dec 13 14:19:45.726395 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:19:45.726402 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:19:45.726407 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:19:45.726413 kernel: GICv3: 16 PPIs implemented
Dec 13 14:19:45.726419 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:19:45.726425 kernel: ACPI: SRAT not present
Dec 13 14:19:45.726431 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:19:45.726437 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:19:45.726444 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:19:45.726450 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 14:19:45.726456 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 14:19:45.726463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:19:45.726469 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:19:45.726475 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:19:45.726482 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:19:45.726488 kernel: arm-pv: using stolen time PV
Dec 13 14:19:45.726494 kernel: Console: colour dummy device 80x25
Dec 13 14:19:45.726500 kernel: ACPI: Core revision 20210730
Dec 13 14:19:45.726507 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:19:45.726513 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:19:45.726520 kernel: LSM: Security Framework initializing
Dec 13 14:19:45.726527 kernel: SELinux: Initializing.
Dec 13 14:19:45.726534 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:19:45.726540 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:19:45.726547 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:19:45.726553 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:19:45.726559 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:19:45.726565 kernel: Remapping and enabling EFI services.
Dec 13 14:19:45.726571 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:19:45.726577 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:19:45.726585 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:19:45.726592 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 14:19:45.726598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:19:45.726604 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:19:45.726611 kernel: Detected PIPT I-cache on CPU2
Dec 13 14:19:45.726617 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 14:19:45.726623 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 14:19:45.726629 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:19:45.726636 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 14:19:45.726642 kernel: Detected PIPT I-cache on CPU3
Dec 13 14:19:45.726650 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 14:19:45.726656 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 14:19:45.726662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:19:45.726669 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 14:19:45.726679 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:19:45.726687 kernel: SMP: Total of 4 processors activated.
Dec 13 14:19:45.726694 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:19:45.726700 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:19:45.726707 kernel: CPU features: detected: Common not Private translations
Dec 13 14:19:45.726713 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:19:45.726720 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:19:45.726726 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:19:45.726734 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:19:45.726741 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:19:45.726747 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:19:45.726753 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:19:45.726760 kernel: alternatives: patching kernel code
Dec 13 14:19:45.726768 kernel: devtmpfs: initialized
Dec 13 14:19:45.726774 kernel: KASLR enabled
Dec 13 14:19:45.726781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:19:45.726788 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:19:45.726794 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:19:45.726801 kernel: SMBIOS 3.0.0 present.
Dec 13 14:19:45.726808 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 13 14:19:45.726814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:19:45.726821 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:19:45.726829 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:19:45.726836 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:19:45.726842 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:19:45.726849 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Dec 13 14:19:45.726856 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:19:45.726862 kernel: cpuidle: using governor menu
Dec 13 14:19:45.726869 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:19:45.726875 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:19:45.726882 kernel: ACPI: bus type PCI registered
Dec 13 14:19:45.726890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:19:45.726897 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:19:45.726903 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:19:45.726910 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:19:45.726916 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:19:45.726923 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:19:45.726929 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:19:45.726936 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:19:45.726943 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:19:45.726951 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:19:45.726958 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:19:45.726983 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:19:45.726990 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:19:45.726997 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:19:45.727004 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:19:45.727010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:19:45.727017 kernel: ACPI: Interpreter enabled
Dec 13 14:19:45.727024 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:19:45.727032 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:19:45.727039 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:19:45.727046 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:19:45.727052 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:19:45.727176 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:19:45.727239 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:19:45.727303 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:19:45.727364 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:19:45.727421 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:19:45.727430 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:19:45.727437 kernel: PCI host bridge to bus 0000:00
Dec 13 14:19:45.727502 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:19:45.727556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:19:45.727608 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:19:45.727660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:19:45.727733 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:19:45.727801 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:19:45.727860 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 14:19:45.727918 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 14:19:45.728035 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:19:45.728112 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:19:45.728175 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 14:19:45.728234 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 14:19:45.728288 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:19:45.728340 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:19:45.728393 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:19:45.728402 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:19:45.728409 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:19:45.728416 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:19:45.728424 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:19:45.728431 kernel: iommu: Default domain type: Translated
Dec 13 14:19:45.728437 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:19:45.728444 kernel: vgaarb: loaded
Dec 13 14:19:45.728450 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:19:45.728457 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:19:45.728463 kernel: PTP clock support registered
Dec 13 14:19:45.728470 kernel: Registered efivars operations
Dec 13 14:19:45.728476 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:19:45.728484 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:19:45.728491 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:19:45.728497 kernel: pnp: PnP ACPI init
Dec 13 14:19:45.728565 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:19:45.728575 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:19:45.728581 kernel: NET: Registered PF_INET protocol family
Dec 13 14:19:45.728588 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:19:45.728595 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:19:45.728604 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:19:45.728610 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:19:45.728617 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:19:45.728624 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:19:45.728631 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:19:45.728638 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:19:45.728645 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:19:45.728652 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:19:45.728659 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 14:19:45.728667 kernel: kvm [1]: HYP mode not available
Dec 13 14:19:45.728675 kernel: Initialise system trusted keyrings
Dec 13 14:19:45.728682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:19:45.728689 kernel: Key type asymmetric registered
Dec 13 14:19:45.728696 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:19:45.728702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:19:45.728709 kernel: io scheduler mq-deadline registered
Dec 13 14:19:45.728716 kernel: io scheduler kyber registered
Dec 13 14:19:45.728722 kernel: io scheduler bfq registered
Dec 13 14:19:45.728730 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:19:45.728737 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:19:45.728743 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:19:45.728802 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 14:19:45.728813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:19:45.728819 kernel: thunder_xcv, ver 1.0
Dec 13 14:19:45.728826 kernel: thunder_bgx, ver 1.0
Dec 13 14:19:45.728832 kernel: nicpf, ver 1.0
Dec 13 14:19:45.728839 kernel: nicvf, ver 1.0
Dec 13 14:19:45.728914 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:19:45.729015 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:19:45 UTC (1734099585)
Dec 13 14:19:45.729026 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:19:45.729033 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:19:45.729040 kernel: Segment Routing with IPv6
Dec 13 14:19:45.729046 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:19:45.729053 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:19:45.729059 kernel: Key type dns_resolver registered
Dec 13 14:19:45.729069 kernel: registered taskstats version 1
Dec 13 14:19:45.729076 kernel: Loading compiled-in X.509 certificates
Dec 13 14:19:45.729083 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:19:45.729089 kernel: Key type .fscrypt registered
Dec 13 14:19:45.729096 kernel: Key type fscrypt-provisioning registered
Dec 13 14:19:45.729102 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:19:45.729109 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:19:45.729116 kernel: ima: No architecture policies found
Dec 13 14:19:45.729122 kernel: clk: Disabling unused clocks
Dec 13 14:19:45.729130 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:19:45.729137 kernel: Run /init as init process
Dec 13 14:19:45.729143 kernel: with arguments:
Dec 13 14:19:45.729150 kernel: /init
Dec 13 14:19:45.729156 kernel: with environment:
Dec 13 14:19:45.729162 kernel: HOME=/
Dec 13 14:19:45.729169 kernel: TERM=linux
Dec 13 14:19:45.729175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:19:45.729184 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:19:45.729194 systemd[1]: Detected virtualization kvm.
Dec 13 14:19:45.729202 systemd[1]: Detected architecture arm64.
Dec 13 14:19:45.729214 systemd[1]: Running in initrd.
Dec 13 14:19:45.729221 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:19:45.729228 systemd[1]: Hostname set to .
Dec 13 14:19:45.729236 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:19:45.729243 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:19:45.729251 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:19:45.729258 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:19:45.729266 systemd[1]: Reached target paths.target.
Dec 13 14:19:45.729273 systemd[1]: Reached target slices.target.
Dec 13 14:19:45.729280 systemd[1]: Reached target swap.target.
Dec 13 14:19:45.729287 systemd[1]: Reached target timers.target.
Dec 13 14:19:45.729295 systemd[1]: Listening on iscsid.socket.
Dec 13 14:19:45.729303 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:19:45.729310 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:19:45.729318 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:19:45.729325 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:19:45.729332 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:19:45.729340 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:19:45.729347 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:19:45.729354 systemd[1]: Reached target sockets.target.
Dec 13 14:19:45.729361 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:19:45.729369 systemd[1]: Finished network-cleanup.service.
Dec 13 14:19:45.729377 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:19:45.729384 systemd[1]: Starting systemd-journald.service...
Dec 13 14:19:45.729391 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:19:45.729398 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:19:45.729406 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:19:45.729413 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:19:45.729420 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:19:45.729427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:19:45.729435 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:19:45.729443 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:19:45.729450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:19:45.729457 kernel: audit: type=1130 audit(1734099585.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.729468 systemd-journald[290]: Journal started
Dec 13 14:19:45.729511 systemd-journald[290]: Runtime Journal (/run/log/journal/bda3c2a70f614be4ba1d1516ea7f9856) is 6.0M, max 48.7M, 42.6M free.
Dec 13 14:19:45.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.721733 systemd-modules-load[291]: Inserted module 'overlay'
Dec 13 14:19:45.733586 systemd[1]: Started systemd-journald.service.
Dec 13 14:19:45.733614 kernel: audit: type=1130 audit(1734099585.732:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.739324 systemd-resolved[292]: Positive Trust Anchors:
Dec 13 14:19:45.739338 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:19:45.739369 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:19:45.749445 kernel: audit: type=1130 audit(1734099585.739:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.749464 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:19:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.740000 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:19:45.744743 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:19:45.746174 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 13 14:19:45.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.750764 systemd[1]: Started systemd-resolved.service.
Dec 13 14:19:45.755337 kernel: audit: type=1130 audit(1734099585.751:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.751781 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:19:45.755873 dracut-cmdline[308]: dracut-dracut-053
Dec 13 14:19:45.756907 systemd-modules-load[291]: Inserted module 'br_netfilter'
Dec 13 14:19:45.757636 kernel: Bridge firewalling registered
Dec 13 14:19:45.758097 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:19:45.772992 kernel: SCSI subsystem initialized
Dec 13 14:19:45.782985 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:19:45.783007 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:19:45.783989 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:19:45.786000 systemd-modules-load[291]: Inserted module 'dm_multipath'
Dec 13 14:19:45.786993 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:19:45.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.789999 kernel: audit: type=1130 audit(1734099585.786:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.788397 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:19:45.795924 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:19:45.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.798994 kernel: audit: type=1130 audit(1734099585.795:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.821994 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:19:45.833997 kernel: iscsi: registered transport (tcp)
Dec 13 14:19:45.853004 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:19:45.853053 kernel: QLogic iSCSI HBA Driver
Dec 13 14:19:45.887640 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:19:45.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.889260 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:19:45.891535 kernel: audit: type=1130 audit(1734099585.887:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:45.935997 kernel: raid6: neonx8 gen() 13785 MB/s
Dec 13 14:19:45.952983 kernel: raid6: neonx8 xor() 10778 MB/s
Dec 13 14:19:45.969992 kernel: raid6: neonx4 gen() 13532 MB/s
Dec 13 14:19:45.986981 kernel: raid6: neonx4 xor() 11089 MB/s
Dec 13 14:19:46.003989 kernel: raid6: neonx2 gen() 12896 MB/s
Dec 13 14:19:46.020978 kernel: raid6: neonx2 xor() 10439 MB/s
Dec 13 14:19:46.037978 kernel: raid6: neonx1 gen() 10541 MB/s
Dec 13 14:19:46.054990 kernel: raid6: neonx1 xor() 8795 MB/s
Dec 13 14:19:46.071985 kernel: raid6: int64x8 gen() 6266 MB/s
Dec 13 14:19:46.088977 kernel: raid6: int64x8 xor() 3539 MB/s
Dec 13 14:19:46.105990 kernel: raid6: int64x4 gen() 7192 MB/s
Dec 13 14:19:46.122986 kernel: raid6: int64x4 xor() 3843 MB/s
Dec 13 14:19:46.139977 kernel: raid6: int64x2 gen() 6134 MB/s
Dec 13 14:19:46.156982 kernel: raid6: int64x2 xor() 3317 MB/s
Dec 13 14:19:46.173992 kernel: raid6: int64x1 gen() 5044 MB/s
Dec 13 14:19:46.191194 kernel: raid6: int64x1 xor() 2645 MB/s
Dec 13 14:19:46.191205 kernel: raid6: using algorithm neonx8 gen() 13785 MB/s
Dec 13 14:19:46.191213 kernel: raid6: .... xor() 10778 MB/s, rmw enabled
Dec 13 14:19:46.191222 kernel: raid6: using neon recovery algorithm
Dec 13 14:19:46.201994 kernel: xor: measuring software checksum speed
Dec 13 14:19:46.202011 kernel: 8regs : 17195 MB/sec
Dec 13 14:19:46.202991 kernel: 32regs : 20712 MB/sec
Dec 13 14:19:46.203002 kernel: arm64_neon : 27331 MB/sec
Dec 13 14:19:46.203011 kernel: xor: using function: arm64_neon (27331 MB/sec)
Dec 13 14:19:46.255996 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:19:46.268222 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:19:46.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:46.271000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:19:46.271992 kernel: audit: type=1130 audit(1734099586.267:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:46.272016 kernel: audit: type=1334 audit(1734099586.271:10): prog-id=7 op=LOAD
Dec 13 14:19:46.271000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:19:46.272406 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:19:46.288129 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Dec 13 14:19:46.291557 systemd[1]: Started systemd-udevd.service.
Dec 13 14:19:46.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:46.293376 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:19:46.306134 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Dec 13 14:19:46.335660 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:19:46.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:46.337306 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:19:46.373407 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:19:46.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:46.399990 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:19:46.403245 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:19:46.403260 kernel: GPT:9289727 != 19775487 Dec 13 14:19:46.403273 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:19:46.403281 kernel: GPT:9289727 != 19775487 Dec 13 14:19:46.403289 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:19:46.403297 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:19:46.414424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:19:46.416356 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543) Dec 13 14:19:46.417553 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:19:46.418350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:19:46.424172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:19:46.429143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:19:46.433130 systemd[1]: Starting disk-uuid.service... Dec 13 14:19:46.438587 disk-uuid[563]: Primary Header is updated. Dec 13 14:19:46.438587 disk-uuid[563]: Secondary Entries is updated. Dec 13 14:19:46.438587 disk-uuid[563]: Secondary Header is updated. 
Dec 13 14:19:46.441080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:19:47.455000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:19:47.455365 disk-uuid[564]: The operation has completed successfully. Dec 13 14:19:47.476487 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:19:47.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.476582 systemd[1]: Finished disk-uuid.service. Dec 13 14:19:47.477939 systemd[1]: Starting verity-setup.service... Dec 13 14:19:47.493461 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:19:47.513920 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:19:47.515864 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:19:47.517564 systemd[1]: Finished verity-setup.service. Dec 13 14:19:47.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.561753 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:19:47.562765 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:19:47.562431 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:19:47.563056 systemd[1]: Starting ignition-setup.service... Dec 13 14:19:47.564808 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 14:19:47.572414 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:19:47.572444 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:19:47.572453 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:19:47.580540 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:19:47.586096 systemd[1]: Finished ignition-setup.service. Dec 13 14:19:47.587381 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:19:47.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.647000 audit: BPF prog-id=9 op=LOAD Dec 13 14:19:47.646742 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:19:47.648549 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:19:47.669087 ignition[645]: Ignition 2.14.0 Dec 13 14:19:47.669097 ignition[645]: Stage: fetch-offline Dec 13 14:19:47.669134 ignition[645]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:19:47.669143 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:19:47.669262 ignition[645]: parsed url from cmdline: "" Dec 13 14:19:47.669265 ignition[645]: no config URL provided Dec 13 14:19:47.669270 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:19:47.669277 ignition[645]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:19:47.669292 ignition[645]: op(1): [started] loading QEMU firmware config module Dec 13 14:19:47.669297 ignition[645]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:19:47.677394 ignition[645]: op(1): [finished] loading QEMU firmware config module Dec 13 14:19:47.678130 systemd-networkd[738]: lo: Link UP Dec 13 14:19:47.678133 systemd-networkd[738]: lo: Gained carrier Dec 13 14:19:47.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.678682 systemd-networkd[738]: Enumeration completed Dec 13 14:19:47.678778 systemd[1]: Started systemd-networkd.service. Dec 13 14:19:47.679076 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:19:47.679495 systemd[1]: Reached target network.target. Dec 13 14:19:47.680460 systemd-networkd[738]: eth0: Link UP Dec 13 14:19:47.680464 systemd-networkd[738]: eth0: Gained carrier Dec 13 14:19:47.681262 systemd[1]: Starting iscsiuio.service... Dec 13 14:19:47.690231 systemd[1]: Started iscsiuio.service. Dec 13 14:19:47.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:47.691716 systemd[1]: Starting iscsid.service... Dec 13 14:19:47.694899 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:19:47.694899 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:19:47.694899 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:19:47.694899 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:19:47.694899 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:19:47.694899 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:19:47.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.697720 systemd[1]: Started iscsid.service. Dec 13 14:19:47.701037 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:19:47.701727 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:19:47.711915 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:19:47.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.712764 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:19:47.714067 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:19:47.715358 systemd[1]: Reached target remote-fs.target. Dec 13 14:19:47.717252 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:19:47.720545 ignition[645]: parsing config with SHA512: 83d78d43ad4a66ea7c4bcc6f0797b90381921a80c5c0f988c66a8b57f93dd89c747bba9e821d9122367db7bb75a5a19bf74b413f5c1368efc9405b03b025861d Dec 13 14:19:47.729695 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:19:47.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.732823 unknown[645]: fetched base config from "system" Dec 13 14:19:47.732838 unknown[645]: fetched user config from "qemu" Dec 13 14:19:47.733375 ignition[645]: fetch-offline: fetch-offline passed Dec 13 14:19:47.733433 ignition[645]: Ignition finished successfully Dec 13 14:19:47.735451 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:19:47.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.736166 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:19:47.736778 systemd[1]: Starting ignition-kargs.service... Dec 13 14:19:47.745284 ignition[759]: Ignition 2.14.0 Dec 13 14:19:47.745293 ignition[759]: Stage: kargs Dec 13 14:19:47.745375 ignition[759]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:19:47.745384 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:19:47.748239 systemd[1]: Finished ignition-kargs.service. 
Dec 13 14:19:47.746221 ignition[759]: kargs: kargs passed Dec 13 14:19:47.746260 ignition[759]: Ignition finished successfully Dec 13 14:19:47.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.749653 systemd[1]: Starting ignition-disks.service... Dec 13 14:19:47.755106 ignition[765]: Ignition 2.14.0 Dec 13 14:19:47.755116 ignition[765]: Stage: disks Dec 13 14:19:47.755197 ignition[765]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:19:47.755207 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:19:47.756038 ignition[765]: disks: disks passed Dec 13 14:19:47.756074 ignition[765]: Ignition finished successfully Dec 13 14:19:47.759380 systemd[1]: Finished ignition-disks.service. Dec 13 14:19:47.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.760056 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:19:47.761003 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:19:47.762055 systemd[1]: Reached target local-fs.target. Dec 13 14:19:47.763004 systemd[1]: Reached target sysinit.target. Dec 13 14:19:47.764015 systemd[1]: Reached target basic.target. Dec 13 14:19:47.765660 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:19:47.775726 systemd-fsck[773]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:19:47.779542 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:19:47.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.780901 systemd[1]: Mounting sysroot.mount... 
Dec 13 14:19:47.786995 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:19:47.786995 systemd[1]: Mounted sysroot.mount. Dec 13 14:19:47.787550 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:19:47.789727 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:19:47.790479 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:19:47.790516 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:19:47.790539 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:19:47.792178 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:19:47.794521 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:19:47.798544 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:19:47.802788 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:19:47.805824 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:19:47.809587 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:19:47.834904 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:19:47.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.836248 systemd[1]: Starting ignition-mount.service... Dec 13 14:19:47.837367 systemd[1]: Starting sysroot-boot.service... Dec 13 14:19:47.841629 bash[824]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 14:19:47.849999 ignition[826]: INFO : Ignition 2.14.0 Dec 13 14:19:47.849999 ignition[826]: INFO : Stage: mount Dec 13 14:19:47.851240 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:19:47.851240 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:19:47.851240 ignition[826]: INFO : mount: mount passed Dec 13 14:19:47.851240 ignition[826]: INFO : Ignition finished successfully Dec 13 14:19:47.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:47.852605 systemd[1]: Finished ignition-mount.service. Dec 13 14:19:47.857705 systemd[1]: Finished sysroot-boot.service. Dec 13 14:19:47.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:48.523782 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:19:48.530223 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834) Dec 13 14:19:48.530257 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:19:48.530267 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:19:48.531142 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:19:48.533852 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:19:48.535465 systemd[1]: Starting ignition-files.service... 
Dec 13 14:19:48.549314 ignition[854]: INFO : Ignition 2.14.0 Dec 13 14:19:48.549314 ignition[854]: INFO : Stage: files Dec 13 14:19:48.550619 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:19:48.550619 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:19:48.550619 ignition[854]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:19:48.553081 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:19:48.553081 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:19:48.558150 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:19:48.559170 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:19:48.560035 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:19:48.560035 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:19:48.560035 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:19:48.559373 unknown[854]: wrote ssh authorized keys file for user: core Dec 13 14:19:48.627649 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:19:48.865599 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:19:48.866988 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:19:48.868191 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:19:48.877569 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:19:48.877569 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:19:48.877569 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:19:48.877569 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:19:48.877569 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 14:19:49.018290 systemd-networkd[738]: eth0: Gained IPv6LL Dec 13 14:19:49.219171 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 14:19:49.744172 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:19:49.744172 ignition[854]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:19:49.746843 ignition[854]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:19:49.790349 ignition[854]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:19:49.791526 ignition[854]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:19:49.791526 ignition[854]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:19:49.791526 ignition[854]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:19:49.791526 ignition[854]: INFO : files: files passed Dec 13 14:19:49.791526 ignition[854]: INFO : Ignition finished successfully Dec 13 14:19:49.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.795086 systemd[1]: Finished ignition-files.service. Dec 13 14:19:49.796552 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:19:49.797322 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:19:49.797919 systemd[1]: Starting ignition-quench.service... Dec 13 14:19:49.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.802391 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:19:49.801029 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:19:49.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.804678 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:19:49.801108 systemd[1]: Finished ignition-quench.service. Dec 13 14:19:49.802764 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:19:49.804104 systemd[1]: Reached target ignition-complete.target. Dec 13 14:19:49.805784 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:19:49.817318 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:19:49.817396 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:19:49.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.818654 systemd[1]: Reached target initrd-fs.target. Dec 13 14:19:49.819605 systemd[1]: Reached target initrd.target. Dec 13 14:19:49.820547 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:19:49.821201 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:19:49.830731 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:19:49.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.831955 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:19:49.839134 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:19:49.839772 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:19:49.840802 systemd[1]: Stopped target timers.target. Dec 13 14:19:49.841870 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:19:49.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.841989 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:19:49.842946 systemd[1]: Stopped target initrd.target. Dec 13 14:19:49.843978 systemd[1]: Stopped target basic.target. Dec 13 14:19:49.844893 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:19:49.845873 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:19:49.846846 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:19:49.848044 systemd[1]: Stopped target remote-fs.target. Dec 13 14:19:49.849050 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:19:49.850109 systemd[1]: Stopped target sysinit.target. Dec 13 14:19:49.851036 systemd[1]: Stopped target local-fs.target. Dec 13 14:19:49.852008 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:19:49.852968 systemd[1]: Stopped target swap.target. Dec 13 14:19:49.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.853854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 14:19:49.853945 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:19:49.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.855046 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:19:49.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.855897 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:19:49.856002 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:19:49.857105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:19:49.857193 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:19:49.858170 systemd[1]: Stopped target paths.target. Dec 13 14:19:49.859031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:19:49.863014 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:19:49.863707 systemd[1]: Stopped target slices.target. Dec 13 14:19:49.864698 systemd[1]: Stopped target sockets.target. Dec 13 14:19:49.865627 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:19:49.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.865725 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:19:49.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:49.866739 systemd[1]: ignition-files.service: Deactivated successfully. 
Dec 13 14:19:49.866826 systemd[1]: Stopped ignition-files.service.
Dec 13 14:19:49.870229 iscsid[744]: iscsid shutting down.
Dec 13 14:19:49.868714 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:19:49.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.869896 systemd[1]: Stopping iscsid.service...
Dec 13 14:19:49.870588 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:19:49.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.870682 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:19:49.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.880677 ignition[895]: INFO : Ignition 2.14.0
Dec 13 14:19:49.880677 ignition[895]: INFO : Stage: umount
Dec 13 14:19:49.880677 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:19:49.880677 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:19:49.880677 ignition[895]: INFO : umount: umount passed
Dec 13 14:19:49.880677 ignition[895]: INFO : Ignition finished successfully
Dec 13 14:19:49.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.872301 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:19:49.872803 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:19:49.872915 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:19:49.874084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:19:49.874184 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:19:49.876821 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:19:49.876906 systemd[1]: Stopped iscsid.service.
Dec 13 14:19:49.877832 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:19:49.877897 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:19:49.878841 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:19:49.878905 systemd[1]: Closed iscsid.socket.
Dec 13 14:19:49.879414 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:19:49.879448 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:19:49.880116 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:19:49.880150 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:19:49.881238 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:19:49.881272 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:19:49.882377 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:19:49.885326 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:19:49.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.885726 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:19:49.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.885796 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:19:49.886695 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:19:49.886773 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:19:49.906000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:19:49.888353 systemd[1]: Stopped target network.target.
Dec 13 14:19:49.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.889573 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:19:49.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.889604 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:19:49.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.890772 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:19:49.891844 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:19:49.899047 systemd-networkd[738]: eth0: DHCPv6 lease lost
Dec 13 14:19:49.914000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:19:49.901634 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:19:49.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.901720 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:19:49.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.903074 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:19:49.903149 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:19:49.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.904077 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:19:49.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.904106 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:19:49.905746 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:19:49.906735 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:19:49.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.906787 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:19:49.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.907870 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:19:49.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.907906 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:19:49.909637 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:19:49.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.909676 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:19:49.910591 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:19:49.915096 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:19:49.915530 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:19:49.915602 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:19:49.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:49.916828 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:19:49.916871 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:19:49.918294 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:19:49.918374 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:19:49.920614 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:19:49.920717 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:19:49.921651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:19:49.921682 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:19:49.922638 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:19:49.922664 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:19:49.923611 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:19:49.923647 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:19:49.924727 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:19:49.924758 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:19:49.925978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:19:49.926012 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:19:49.927593 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:19:49.928270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:19:49.928323 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:19:49.932225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:19:49.932298 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:19:49.933337 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:19:49.934866 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:19:49.940409 systemd[1]: Switching root.
Dec 13 14:19:49.958164 systemd-journald[290]: Journal stopped
Dec 13 14:19:51.907869 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:19:51.907926 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:19:51.907942 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:19:51.907959 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:19:51.907984 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:19:51.907995 kernel: SELinux: policy capability open_perms=1
Dec 13 14:19:51.908007 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:19:51.908017 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:19:51.908028 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:19:51.908038 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:19:51.908047 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:19:51.908057 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:19:51.908067 systemd[1]: Successfully loaded SELinux policy in 30.500ms.
Dec 13 14:19:51.908083 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.386ms.
Dec 13 14:19:51.908099 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:19:51.908112 systemd[1]: Detected virtualization kvm.
Dec 13 14:19:51.908122 systemd[1]: Detected architecture arm64.
Dec 13 14:19:51.908133 systemd[1]: Detected first boot.
Dec 13 14:19:51.908143 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:19:51.908153 kernel: kauditd_printk_skb: 64 callbacks suppressed
Dec 13 14:19:51.908164 kernel: audit: type=1400 audit(1734099590.096:75): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:19:51.908176 kernel: audit: type=1400 audit(1734099590.096:76): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:19:51.908187 kernel: audit: type=1334 audit(1734099590.097:77): prog-id=10 op=LOAD
Dec 13 14:19:51.908198 kernel: audit: type=1334 audit(1734099590.097:78): prog-id=10 op=UNLOAD
Dec 13 14:19:51.908207 kernel: audit: type=1334 audit(1734099590.098:79): prog-id=11 op=LOAD
Dec 13 14:19:51.908217 kernel: audit: type=1334 audit(1734099590.098:80): prog-id=11 op=UNLOAD
Dec 13 14:19:51.908227 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:19:51.908237 kernel: audit: type=1400 audit(1734099590.138:81): avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:19:51.908248 kernel: audit: type=1300 audit(1734099590.138:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:19:51.908260 kernel: audit: type=1327 audit(1734099590.138:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:19:51.908271 kernel: audit: type=1400 audit(1734099590.139:82): avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:19:51.908283 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:19:51.908295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:19:51.908306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:19:51.908318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:19:51.908329 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:19:51.908341 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:19:51.908351 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:19:51.908367 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:19:51.908377 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:19:51.908388 systemd[1]: Created slice system-getty.slice.
Dec 13 14:19:51.908399 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:19:51.908411 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:19:51.908421 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:19:51.908432 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:19:51.908442 systemd[1]: Created slice user.slice.
Dec 13 14:19:51.908453 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:19:51.908463 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:19:51.908473 systemd[1]: Set up automount boot.automount.
Dec 13 14:19:51.908483 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:19:51.908494 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:19:51.908507 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:19:51.908517 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:19:51.908527 systemd[1]: Reached target integritysetup.target.
Dec 13 14:19:51.908537 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:19:51.908547 systemd[1]: Reached target remote-fs.target.
Dec 13 14:19:51.908557 systemd[1]: Reached target slices.target.
Dec 13 14:19:51.908568 systemd[1]: Reached target swap.target.
Dec 13 14:19:51.908578 systemd[1]: Reached target torcx.target.
Dec 13 14:19:51.908588 systemd[1]: Reached target veritysetup.target.
Dec 13 14:19:51.908599 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:19:51.908609 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:19:51.908620 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:19:51.908631 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:19:51.908641 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:19:51.908651 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:19:51.908661 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:19:51.908671 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:19:51.908681 systemd[1]: Mounting media.mount...
Dec 13 14:19:51.908693 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:19:51.908703 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:19:51.908713 systemd[1]: Mounting tmp.mount...
Dec 13 14:19:51.908724 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:19:51.908734 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:19:51.908744 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:19:51.908754 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:19:51.908765 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:19:51.908776 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:19:51.908787 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:19:51.908797 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:19:51.908807 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:19:51.908817 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:19:51.908829 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:19:51.908839 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:19:51.908851 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:19:51.908861 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:19:51.908871 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:19:51.908882 kernel: fuse: init (API version 7.34)
Dec 13 14:19:51.908892 kernel: loop: module loaded
Dec 13 14:19:51.908901 systemd[1]: Starting systemd-journald.service...
Dec 13 14:19:51.908911 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:19:51.908922 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:19:51.908932 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:19:51.908941 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:19:51.908957 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:19:51.908973 systemd[1]: Stopped verity-setup.service.
Dec 13 14:19:51.908985 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:19:51.908995 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:19:51.909005 systemd[1]: Mounted media.mount.
Dec 13 14:19:51.909015 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:19:51.909025 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:19:51.909035 systemd[1]: Mounted tmp.mount.
Dec 13 14:19:51.909045 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:19:51.909056 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:19:51.909066 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:19:51.909079 systemd-journald[994]: Journal started
Dec 13 14:19:51.909119 systemd-journald[994]: Runtime Journal (/run/log/journal/bda3c2a70f614be4ba1d1516ea7f9856) is 6.0M, max 48.7M, 42.6M free.
Dec 13 14:19:50.013000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:19:50.096000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:19:50.096000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:19:50.097000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:19:50.097000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:19:50.098000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:19:50.098000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:19:50.138000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:19:50.138000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:19:50.138000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:19:50.139000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:19:50.139000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5985 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:19:50.139000 audit: CWD cwd="/"
Dec 13 14:19:50.139000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:19:50.139000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:19:50.139000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:19:51.798000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:19:51.798000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:19:51.798000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:19:51.798000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:19:51.798000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:19:51.798000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:19:51.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.806000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:19:51.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.880000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:19:51.880000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:19:51.880000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:19:51.880000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:19:51.880000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:19:51.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.902000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:19:51.902000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdec25b50 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:19:51.902000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:19:51.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:50.138568 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:19:51.797853 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:19:50.138803 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:19:51.797863 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:19:50.138822 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:19:51.800305 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:19:50.138850 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:19:50.138860 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:19:50.138886 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:19:51.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:50.138897 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:19:50.139115 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:19:50.139149 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:19:50.139160 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:19:51.910995 systemd[1]: Started systemd-journald.service.
Dec 13 14:19:51.910915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:19:50.139538 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:19:50.139570 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:19:50.139588 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:19:50.139602 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:19:50.139618 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:19:50.139630 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:19:51.557991 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:19:51.558250 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:19:51.558352 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:19:51.558512 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:19:51.558560 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:19:51.558616 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-12-13T14:19:51Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:19:51.912074 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:19:51.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.913235 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:19:51.913402 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:19:51.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.914248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:19:51.914387 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:19:51.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.915312 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:19:51.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.915672 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:19:51.916486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:19:51.916618 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:19:51.917563 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:19:51.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.918464 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:19:51.919456 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:19:51.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.920469 systemd[1]: Reached target network-pre.target.
Dec 13 14:19:51.922181 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:19:51.923730 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:19:51.924466 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:19:51.926152 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:19:51.927702 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:19:51.928445 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:19:51.929526 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:19:51.930248 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:19:51.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.931293 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:19:51.934105 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:19:51.934945 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:19:51.935754 systemd-journald[994]: Time spent on flushing to /var/log/journal/bda3c2a70f614be4ba1d1516ea7f9856 is 19.543ms for 986 entries.
Dec 13 14:19:51.935754 systemd-journald[994]: System Journal (/var/log/journal/bda3c2a70f614be4ba1d1516ea7f9856) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:19:51.964160 systemd-journald[994]: Received client request to flush runtime journal.
Dec 13 14:19:51.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.935727 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:19:51.942777 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:19:51.948925 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:19:51.964706 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:19:51.949877 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:19:51.950648 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:19:51.952380 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:19:51.955369 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:19:51.965000 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:19:51.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:51.966848 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:19:51.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.281158 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:19:52.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.281000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:19:52.281000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:19:52.281000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:19:52.281000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:19:52.283196 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:19:52.298169 systemd-udevd[1032]: Using default interface naming scheme 'v252'.
Dec 13 14:19:52.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.321000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:19:52.320488 systemd[1]: Started systemd-udevd.service.
Dec 13 14:19:52.323041 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:19:52.336000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:19:52.336000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:19:52.336000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:19:52.337706 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:19:52.345889 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Dec 13 14:19:52.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.366568 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:19:52.401694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:19:52.423550 systemd-networkd[1043]: lo: Link UP
Dec 13 14:19:52.423559 systemd-networkd[1043]: lo: Gained carrier
Dec 13 14:19:52.423876 systemd-networkd[1043]: Enumeration completed
Dec 13 14:19:52.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.423985 systemd[1]: Started systemd-networkd.service.
Dec 13 14:19:52.424015 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:19:52.425725 systemd-networkd[1043]: eth0: Link UP
Dec 13 14:19:52.425736 systemd-networkd[1043]: eth0: Gained carrier
Dec 13 14:19:52.431432 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:19:52.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.433182 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:19:52.448096 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:19:52.452371 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:19:52.474831 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:19:52.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.475653 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:19:52.477338 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:19:52.482001 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:19:52.508747 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:19:52.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.509506 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:19:52.510147 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:19:52.510173 systemd[1]: Reached target local-fs.target.
Dec 13 14:19:52.510718 systemd[1]: Reached target machines.target.
Dec 13 14:19:52.512333 systemd[1]: Starting ldconfig.service...
Dec 13 14:19:52.514255 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:19:52.514325 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:19:52.515536 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:19:52.517663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:19:52.519749 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:19:52.521509 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:19:52.522380 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl)
Dec 13 14:19:52.523417 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:19:52.526916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:19:52.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.533930 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:19:52.540064 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:19:52.540255 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:19:52.554007 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:19:52.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.591689 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:19:52.607996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:19:52.617577 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:19:52.617577 systemd-fsck[1077]: /dev/vda1: 236 files, 117175/258078 clusters
Dec 13 14:19:52.619159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:19:52.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.624044 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:19:52.636397 (sd-sysext)[1080]: Using extensions 'kubernetes'.
Dec 13 14:19:52.636729 (sd-sysext)[1080]: Merged extensions into '/usr'.
Dec 13 14:19:52.652421 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:19:52.653703 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:19:52.655525 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:19:52.657348 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:19:52.658067 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:19:52.658190 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:19:52.658863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:19:52.659024 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:19:52.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.660322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:19:52.660453 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:19:52.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.661654 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:19:52.661757 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:19:52.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.663056 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:19:52.663157 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:19:52.788145 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:19:52.791687 systemd[1]: Finished ldconfig.service.
Dec 13 14:19:52.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.899092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:19:52.900680 systemd[1]: Mounting boot.mount...
Dec 13 14:19:52.902313 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:19:52.908165 systemd[1]: Mounted boot.mount.
Dec 13 14:19:52.908899 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:19:52.910860 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:19:52.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.913443 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:19:52.915338 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:19:52.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:52.918059 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:19:52.920493 systemd[1]: Reloading.
Dec 13 14:19:52.928867 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:19:52.931785 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:19:52.935565 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:19:52.960286 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2024-12-13T14:19:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:19:52.960311 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2024-12-13T14:19:52Z" level=info msg="torcx already run"
Dec 13 14:19:53.008063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:19:53.008220 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:19:53.023309 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:19:53.065000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:19:53.065000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:19:53.065000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:19:53.065000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:19:53.065000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:19:53.065000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:19:53.067000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:19:53.067000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:19:53.068000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:19:53.071486 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:19:53.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.075817 systemd[1]: Starting audit-rules.service...
Dec 13 14:19:53.077777 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:19:53.079753 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:19:53.080000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:19:53.081941 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:19:53.082000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:19:53.084487 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:19:53.086763 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:19:53.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.090638 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:19:53.091609 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:19:53.094000 audit[1154]: SYSTEM_BOOT pid=1154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.098699 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.100173 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:19:53.102140 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:19:53.103939 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:19:53.104668 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.104872 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:19:53.105062 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:19:53.106257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:19:53.106390 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:19:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.107528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:19:53.107642 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:19:53.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.108887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:19:53.109025 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:19:53.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.110163 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:19:53.110297 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.111319 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:19:53.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.113481 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.114759 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:19:53.116544 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:19:53.118334 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:19:53.118907 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.119051 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:19:53.119155 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:19:53.119929 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:19:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.121139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:19:53.121258 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:19:53.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.122247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:19:53.122355 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:19:53.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.123554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:19:53.123661 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:19:53.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.124910 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:19:53.125033 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.127300 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:19:53.130218 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.131398 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:19:53.133062 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:19:53.134763 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:19:53.136513 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:19:53.137264 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.137409 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:19:53.138712 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:19:53.139649 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:19:53.140734 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:19:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.141797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:19:53.141914 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:19:53.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.143139 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:19:53.143252 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:19:53.144278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:19:53.144387 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:19:53.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.145395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:19:53.145871 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:19:53.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.147225 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:19:53.147318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:19:53.149263 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:19:53.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.151220 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:19:53.152087 systemd-timesyncd[1153]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:19:53.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:19:53.152000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:19:53.152000 audit[1180]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe04452a0 a2=420 a3=0 items=0 ppid=1148 pid=1180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:19:53.152000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:19:53.152639 augenrules[1180]: No rules
Dec 13 14:19:53.152144 systemd-timesyncd[1153]: Initial clock synchronization to Fri 2024-12-13 14:19:52.969680 UTC.
Dec 13 14:19:53.152391 systemd[1]: Reached target time-set.target.
Dec 13 14:19:53.153230 systemd[1]: Finished audit-rules.service.
Dec 13 14:19:53.154381 systemd-resolved[1152]: Positive Trust Anchors: Dec 13 14:19:53.154594 systemd-resolved[1152]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:19:53.154667 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:19:53.167544 systemd-resolved[1152]: Defaulting to hostname 'linux'. Dec 13 14:19:53.168954 systemd[1]: Started systemd-resolved.service. Dec 13 14:19:53.169683 systemd[1]: Reached target network.target. Dec 13 14:19:53.170281 systemd[1]: Reached target nss-lookup.target. Dec 13 14:19:53.170838 systemd[1]: Reached target sysinit.target. Dec 13 14:19:53.171482 systemd[1]: Started motdgen.path. Dec 13 14:19:53.172018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:19:53.172933 systemd[1]: Started logrotate.timer. Dec 13 14:19:53.173612 systemd[1]: Started mdadm.timer. Dec 13 14:19:53.174135 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:19:53.174716 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:19:53.174742 systemd[1]: Reached target paths.target. Dec 13 14:19:53.175291 systemd[1]: Reached target timers.target. Dec 13 14:19:53.176100 systemd[1]: Listening on dbus.socket. Dec 13 14:19:53.177569 systemd[1]: Starting docker.socket... Dec 13 14:19:53.180524 systemd[1]: Listening on sshd.socket. 
Dec 13 14:19:53.181197 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:19:53.181603 systemd[1]: Listening on docker.socket. Dec 13 14:19:53.182285 systemd[1]: Reached target sockets.target. Dec 13 14:19:53.182826 systemd[1]: Reached target basic.target. Dec 13 14:19:53.183423 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:19:53.183453 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:19:53.184365 systemd[1]: Starting containerd.service... Dec 13 14:19:53.185914 systemd[1]: Starting dbus.service... Dec 13 14:19:53.187363 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:19:53.189129 systemd[1]: Starting extend-filesystems.service... Dec 13 14:19:53.189848 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:19:53.190878 systemd[1]: Starting motdgen.service... Dec 13 14:19:53.194452 systemd[1]: Starting prepare-helm.service... Dec 13 14:19:53.196238 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:19:53.198058 systemd[1]: Starting sshd-keygen.service... Dec 13 14:19:53.201674 systemd[1]: Starting systemd-logind.service... Dec 13 14:19:53.203281 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:19:53.203356 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:19:53.203766 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 14:19:53.204525 systemd[1]: Starting update-engine.service... Dec 13 14:19:53.206366 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:19:53.209789 jq[1205]: true Dec 13 14:19:53.210070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:19:53.210233 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:19:53.211913 jq[1190]: false Dec 13 14:19:53.214343 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:19:53.214534 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:19:53.215670 extend-filesystems[1191]: Found loop1 Dec 13 14:19:53.216638 extend-filesystems[1191]: Found vda Dec 13 14:19:53.217401 extend-filesystems[1191]: Found vda1 Dec 13 14:19:53.218052 extend-filesystems[1191]: Found vda2 Dec 13 14:19:53.218622 extend-filesystems[1191]: Found vda3 Dec 13 14:19:53.219205 extend-filesystems[1191]: Found usr Dec 13 14:19:53.219902 extend-filesystems[1191]: Found vda4 Dec 13 14:19:53.220635 extend-filesystems[1191]: Found vda6 Dec 13 14:19:53.224570 extend-filesystems[1191]: Found vda7 Dec 13 14:19:53.225250 extend-filesystems[1191]: Found vda9 Dec 13 14:19:53.225814 extend-filesystems[1191]: Checking size of /dev/vda9 Dec 13 14:19:53.227258 jq[1209]: true Dec 13 14:19:53.233450 tar[1207]: linux-arm64/helm Dec 13 14:19:53.237064 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:19:53.237234 systemd[1]: Finished motdgen.service. Dec 13 14:19:53.242797 dbus-daemon[1189]: [system] SELinux support is enabled Dec 13 14:19:53.242962 systemd[1]: Started dbus.service. Dec 13 14:19:53.245333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:19:53.245360 systemd[1]: Reached target system-config.target. 
Dec 13 14:19:53.246036 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:19:53.246062 systemd[1]: Reached target user-config.target. Dec 13 14:19:53.246842 extend-filesystems[1191]: Resized partition /dev/vda9 Dec 13 14:19:53.252236 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:19:53.262880 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:19:53.271555 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:19:53.271753 systemd-logind[1200]: New seat seat0. Dec 13 14:19:53.272927 systemd[1]: Started systemd-logind.service. Dec 13 14:19:53.296987 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:19:53.314733 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:19:53.314733 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:19:53.314733 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:19:53.319291 bash[1238]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:19:53.316795 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:19:53.319435 extend-filesystems[1191]: Resized filesystem in /dev/vda9 Dec 13 14:19:53.316990 systemd[1]: Finished extend-filesystems.service. Dec 13 14:19:53.321366 update_engine[1203]: I1213 14:19:53.319226 1203 main.cc:92] Flatcar Update Engine starting Dec 13 14:19:53.318473 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:19:53.326879 systemd[1]: Started update-engine.service. Dec 13 14:19:53.326975 update_engine[1203]: I1213 14:19:53.326893 1203 update_check_scheduler.cc:74] Next update check in 3m34s Dec 13 14:19:53.329325 systemd[1]: Started locksmithd.service. 
Dec 13 14:19:53.339937 env[1214]: time="2024-12-13T14:19:53.339888120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:19:53.365091 env[1214]: time="2024-12-13T14:19:53.365045200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:19:53.365217 env[1214]: time="2024-12-13T14:19:53.365195960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367335 env[1214]: time="2024-12-13T14:19:53.367291280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367335 env[1214]: time="2024-12-13T14:19:53.367329880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367560 env[1214]: time="2024-12-13T14:19:53.367534400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367604 env[1214]: time="2024-12-13T14:19:53.367559840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367604 env[1214]: time="2024-12-13T14:19:53.367572600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:19:53.367604 env[1214]: time="2024-12-13T14:19:53.367581920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367685 env[1214]: time="2024-12-13T14:19:53.367666800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:19:53.367955 env[1214]: time="2024-12-13T14:19:53.367929560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:19:53.368121 env[1214]: time="2024-12-13T14:19:53.368095720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:19:53.368151 env[1214]: time="2024-12-13T14:19:53.368120000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:19:53.368198 env[1214]: time="2024-12-13T14:19:53.368180640Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:19:53.368198 env[1214]: time="2024-12-13T14:19:53.368196000Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:19:53.372532 env[1214]: time="2024-12-13T14:19:53.372498560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:19:53.372640 env[1214]: time="2024-12-13T14:19:53.372621640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:19:53.372670 env[1214]: time="2024-12-13T14:19:53.372640680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:19:53.372690 env[1214]: time="2024-12-13T14:19:53.372676800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:19:53.372711 env[1214]: time="2024-12-13T14:19:53.372691560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.372711 env[1214]: time="2024-12-13T14:19:53.372704920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.372763 env[1214]: time="2024-12-13T14:19:53.372716760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.373275 env[1214]: time="2024-12-13T14:19:53.373235760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.373309 env[1214]: time="2024-12-13T14:19:53.373277760Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.373309 env[1214]: time="2024-12-13T14:19:53.373293000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.373366 env[1214]: time="2024-12-13T14:19:53.373312240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:19:53.373366 env[1214]: time="2024-12-13T14:19:53.373325640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:19:53.373543 env[1214]: time="2024-12-13T14:19:53.373522440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:19:53.373625 env[1214]: time="2024-12-13T14:19:53.373608280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:19:53.373903 env[1214]: time="2024-12-13T14:19:53.373880240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 14:19:53.373931 env[1214]: time="2024-12-13T14:19:53.373913600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.373931 env[1214]: time="2024-12-13T14:19:53.373927080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:19:53.374121 env[1214]: time="2024-12-13T14:19:53.374103680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374151 env[1214]: time="2024-12-13T14:19:53.374123360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374151 env[1214]: time="2024-12-13T14:19:53.374136680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374204 env[1214]: time="2024-12-13T14:19:53.374148320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374238 env[1214]: time="2024-12-13T14:19:53.374205880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374238 env[1214]: time="2024-12-13T14:19:53.374219480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374238 env[1214]: time="2024-12-13T14:19:53.374230440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374303 env[1214]: time="2024-12-13T14:19:53.374241640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374303 env[1214]: time="2024-12-13T14:19:53.374264680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 14:19:53.374411 env[1214]: time="2024-12-13T14:19:53.374391120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374449 env[1214]: time="2024-12-13T14:19:53.374417600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374449 env[1214]: time="2024-12-13T14:19:53.374430600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:19:53.374449 env[1214]: time="2024-12-13T14:19:53.374441680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:19:53.374507 env[1214]: time="2024-12-13T14:19:53.374455240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:19:53.374507 env[1214]: time="2024-12-13T14:19:53.374465720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:19:53.374507 env[1214]: time="2024-12-13T14:19:53.374483720Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:19:53.374569 env[1214]: time="2024-12-13T14:19:53.374514280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:19:53.374745 env[1214]: time="2024-12-13T14:19:53.374693160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.374750400Z" level=info msg="Connect containerd service" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.374776960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.375615240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.375895560Z" level=info msg="Start subscribing containerd event" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.375931680Z" level=info msg="Start recovering state" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376006960Z" level=info msg="Start event monitor" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376023080Z" level=info msg="Start snapshots syncer" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376031720Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376038760Z" level=info msg="Start streaming server" Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376442840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376478640Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:19:53.378663 env[1214]: time="2024-12-13T14:19:53.376560560Z" level=info msg="containerd successfully booted in 0.037603s" Dec 13 14:19:53.377667 systemd[1]: Started containerd.service. 
Dec 13 14:19:53.388187 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:19:53.627279 tar[1207]: linux-arm64/LICENSE Dec 13 14:19:53.627493 tar[1207]: linux-arm64/README.md Dec 13 14:19:53.633708 systemd[1]: Finished prepare-helm.service. Dec 13 14:19:53.882266 systemd-networkd[1043]: eth0: Gained IPv6LL Dec 13 14:19:53.883855 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:19:53.884824 systemd[1]: Reached target network-online.target. Dec 13 14:19:53.886858 systemd[1]: Starting kubelet.service... Dec 13 14:19:54.372523 systemd[1]: Started kubelet.service. Dec 13 14:19:54.835511 kubelet[1258]: E1213 14:19:54.835428 1258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:19:54.838044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:19:54.838166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:19:55.056948 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:19:55.074151 systemd[1]: Finished sshd-keygen.service. Dec 13 14:19:55.076194 systemd[1]: Starting issuegen.service... Dec 13 14:19:55.080478 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:19:55.080618 systemd[1]: Finished issuegen.service. Dec 13 14:19:55.082549 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:19:55.088240 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:19:55.090157 systemd[1]: Started getty@tty1.service. Dec 13 14:19:55.091889 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:19:55.092734 systemd[1]: Reached target getty.target. Dec 13 14:19:55.093394 systemd[1]: Reached target multi-user.target. 
Dec 13 14:19:55.095240 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:19:55.100909 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:19:55.101108 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:19:55.101880 systemd[1]: Startup finished in 550ms (kernel) + 4.412s (initrd) + 5.120s (userspace) = 10.083s. Dec 13 14:19:58.994815 systemd[1]: Created slice system-sshd.slice. Dec 13 14:19:58.995940 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:50226.service. Dec 13 14:19:59.048410 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:19:59.050573 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.061318 systemd[1]: Created slice user-500.slice. Dec 13 14:19:59.062340 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:19:59.067020 systemd-logind[1200]: New session 1 of user core. Dec 13 14:19:59.073834 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:19:59.075082 systemd[1]: Starting user@500.service... Dec 13 14:19:59.077747 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.137744 systemd[1284]: Queued start job for default target default.target. Dec 13 14:19:59.138240 systemd[1284]: Reached target paths.target. Dec 13 14:19:59.138260 systemd[1284]: Reached target sockets.target. Dec 13 14:19:59.138270 systemd[1284]: Reached target timers.target. Dec 13 14:19:59.138281 systemd[1284]: Reached target basic.target. Dec 13 14:19:59.138331 systemd[1284]: Reached target default.target. Dec 13 14:19:59.138356 systemd[1284]: Startup finished in 55ms. Dec 13 14:19:59.138407 systemd[1]: Started user@500.service. Dec 13 14:19:59.139348 systemd[1]: Started session-1.scope. Dec 13 14:19:59.190208 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:50230.service. 
Dec 13 14:19:59.235930 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 50230 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:19:59.237931 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.242968 systemd-logind[1200]: New session 2 of user core. Dec 13 14:19:59.243506 systemd[1]: Started session-2.scope. Dec 13 14:19:59.303740 sshd[1293]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:59.306766 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:50232.service. Dec 13 14:19:59.309297 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:50230.service: Deactivated successfully. Dec 13 14:19:59.309975 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:19:59.310607 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:19:59.311307 systemd-logind[1200]: Removed session 2. Dec 13 14:19:59.348576 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:19:59.350287 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.355623 systemd-logind[1200]: New session 3 of user core. Dec 13 14:19:59.356672 systemd[1]: Started session-3.scope. Dec 13 14:19:59.408533 sshd[1298]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:59.412666 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:50232.service: Deactivated successfully. Dec 13 14:19:59.413236 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:19:59.413779 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:19:59.415149 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:50236.service. Dec 13 14:19:59.416280 systemd-logind[1200]: Removed session 3. 
Dec 13 14:19:59.453190 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 50236 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:19:59.454228 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.457232 systemd-logind[1200]: New session 4 of user core. Dec 13 14:19:59.459211 systemd[1]: Started session-4.scope. Dec 13 14:19:59.512319 sshd[1305]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:59.515990 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:50242.service. Dec 13 14:19:59.516560 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:50236.service: Deactivated successfully. Dec 13 14:19:59.517237 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:19:59.517776 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:19:59.518512 systemd-logind[1200]: Removed session 4. Dec 13 14:19:59.553701 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:19:59.554786 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:59.558431 systemd-logind[1200]: New session 5 of user core. Dec 13 14:19:59.559279 systemd[1]: Started session-5.scope. Dec 13 14:19:59.620427 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:19:59.620632 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:19:59.676991 systemd[1]: Starting docker.service... 
Dec 13 14:19:59.759390 env[1326]: time="2024-12-13T14:19:59.759340946Z" level=info msg="Starting up" Dec 13 14:19:59.760695 env[1326]: time="2024-12-13T14:19:59.760668484Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:19:59.760695 env[1326]: time="2024-12-13T14:19:59.760693271Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:19:59.760782 env[1326]: time="2024-12-13T14:19:59.760718809Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:19:59.760782 env[1326]: time="2024-12-13T14:19:59.760732527Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:19:59.763004 env[1326]: time="2024-12-13T14:19:59.762977850Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:19:59.763105 env[1326]: time="2024-12-13T14:19:59.763090913Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:19:59.763164 env[1326]: time="2024-12-13T14:19:59.763149539Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:19:59.763212 env[1326]: time="2024-12-13T14:19:59.763200220Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:19:59.844805 env[1326]: time="2024-12-13T14:19:59.844700269Z" level=info msg="Loading containers: start." Dec 13 14:19:59.979996 kernel: Initializing XFRM netlink socket Dec 13 14:20:00.006531 env[1326]: time="2024-12-13T14:20:00.006364704Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:20:00.069916 systemd-networkd[1043]: docker0: Link UP Dec 13 14:20:00.091184 env[1326]: time="2024-12-13T14:20:00.091154536Z" level=info msg="Loading containers: done." 
Dec 13 14:20:00.114081 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1393076472-merged.mount: Deactivated successfully. Dec 13 14:20:00.116188 env[1326]: time="2024-12-13T14:20:00.116140811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:20:00.116338 env[1326]: time="2024-12-13T14:20:00.116311249Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:20:00.116442 env[1326]: time="2024-12-13T14:20:00.116418818Z" level=info msg="Daemon has completed initialization" Dec 13 14:20:00.135082 systemd[1]: Started docker.service. Dec 13 14:20:00.136912 env[1326]: time="2024-12-13T14:20:00.136867820Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:20:00.930579 env[1214]: time="2024-12-13T14:20:00.930205917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:20:01.567141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184979652.mount: Deactivated successfully. 
Dec 13 14:20:03.021084 env[1214]: time="2024-12-13T14:20:03.021030255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:03.022440 env[1214]: time="2024-12-13T14:20:03.022408069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:03.024215 env[1214]: time="2024-12-13T14:20:03.024189180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:03.025905 env[1214]: time="2024-12-13T14:20:03.025878882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:03.027463 env[1214]: time="2024-12-13T14:20:03.027429901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:20:03.036387 env[1214]: time="2024-12-13T14:20:03.036361201Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:20:04.812572 env[1214]: time="2024-12-13T14:20:04.812517022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:04.814425 env[1214]: time="2024-12-13T14:20:04.814367739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:20:04.816063 env[1214]: time="2024-12-13T14:20:04.816026772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:04.818269 env[1214]: time="2024-12-13T14:20:04.818230281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:04.819760 env[1214]: time="2024-12-13T14:20:04.819722401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:20:04.831350 env[1214]: time="2024-12-13T14:20:04.831314481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:20:04.998950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:20:04.999143 systemd[1]: Stopped kubelet.service. Dec 13 14:20:05.000517 systemd[1]: Starting kubelet.service... Dec 13 14:20:05.079503 systemd[1]: Started kubelet.service. Dec 13 14:20:05.121468 kubelet[1484]: E1213 14:20:05.121412 1484 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:20:05.124939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:20:05.125087 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
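[Editor's note] The kubelet crash loop logged above (`open /var/lib/kubelet/config.yaml: no such file or directory`, exit status 1, scheduled restarts) is the expected state on a node where `kubeadm init` or `kubeadm join` has not yet generated the kubelet config. A small pre-flight check, with the path taken from the log itself and the helper name being illustrative:

```shell
# Report whether the kubeadm-generated kubelet config exists yet.
kubelet_config_status() {
  if [ -f "$1" ]; then
    echo present
  else
    # kubelet keeps exiting with status 1 until kubeadm writes this file
    echo missing
  fi
}

kubelet_config_status /var/lib/kubelet/config.yaml
```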
Dec 13 14:20:06.081989 env[1214]: time="2024-12-13T14:20:06.081927593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:06.083584 env[1214]: time="2024-12-13T14:20:06.083540117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:06.086188 env[1214]: time="2024-12-13T14:20:06.086158403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:06.087741 env[1214]: time="2024-12-13T14:20:06.087712834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:06.089290 env[1214]: time="2024-12-13T14:20:06.089258228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:20:06.097772 env[1214]: time="2024-12-13T14:20:06.097749152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:20:07.186997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544758040.mount: Deactivated successfully. 
Dec 13 14:20:07.727900 env[1214]: time="2024-12-13T14:20:07.727851502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.728973 env[1214]: time="2024-12-13T14:20:07.728935012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.730897 env[1214]: time="2024-12-13T14:20:07.730863953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.732086 env[1214]: time="2024-12-13T14:20:07.732062360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.732524 env[1214]: time="2024-12-13T14:20:07.732493661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:20:07.741030 env[1214]: time="2024-12-13T14:20:07.741001391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:20:08.341894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198061067.mount: Deactivated successfully. 
Dec 13 14:20:09.106135 env[1214]: time="2024-12-13T14:20:09.106092325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.107569 env[1214]: time="2024-12-13T14:20:09.107528455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.109285 env[1214]: time="2024-12-13T14:20:09.109259236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.111518 env[1214]: time="2024-12-13T14:20:09.111480185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.112423 env[1214]: time="2024-12-13T14:20:09.112390776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:20:09.122916 env[1214]: time="2024-12-13T14:20:09.122873361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:20:09.543872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947394259.mount: Deactivated successfully. 
Dec 13 14:20:09.547484 env[1214]: time="2024-12-13T14:20:09.547449045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.548610 env[1214]: time="2024-12-13T14:20:09.548584782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.550101 env[1214]: time="2024-12-13T14:20:09.550067847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.551387 env[1214]: time="2024-12-13T14:20:09.551362733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:09.552023 env[1214]: time="2024-12-13T14:20:09.551999209Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:20:09.560751 env[1214]: time="2024-12-13T14:20:09.560723578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:20:10.177759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891648170.mount: Deactivated successfully. 
Dec 13 14:20:12.325847 env[1214]: time="2024-12-13T14:20:12.325780839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:12.328642 env[1214]: time="2024-12-13T14:20:12.328607084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:12.331213 env[1214]: time="2024-12-13T14:20:12.331175422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:12.335720 env[1214]: time="2024-12-13T14:20:12.335682318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:12.336725 env[1214]: time="2024-12-13T14:20:12.336686485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:20:15.248997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:20:15.249157 systemd[1]: Stopped kubelet.service. Dec 13 14:20:15.250579 systemd[1]: Starting kubelet.service... Dec 13 14:20:15.332254 systemd[1]: Started kubelet.service. 
Dec 13 14:20:15.375862 kubelet[1601]: E1213 14:20:15.375807 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:20:15.378228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:20:15.378352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:20:17.557262 systemd[1]: Stopped kubelet.service. Dec 13 14:20:17.559267 systemd[1]: Starting kubelet.service... Dec 13 14:20:17.574746 systemd[1]: Reloading. Dec 13 14:20:17.622710 /usr/lib/systemd/system-generators/torcx-generator[1637]: time="2024-12-13T14:20:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:20:17.622745 /usr/lib/systemd/system-generators/torcx-generator[1637]: time="2024-12-13T14:20:17Z" level=info msg="torcx already run" Dec 13 14:20:17.750344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:20:17.750518 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:20:17.766322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:20:17.846461 systemd[1]: Started kubelet.service. Dec 13 14:20:17.850701 systemd[1]: Stopping kubelet.service... 
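[Editor's note] The systemd warnings above about `locksmithd.service` using `CPUShares=` and `MemoryLimit=` are typically resolved with a drop-in override rather than editing the vendor unit. A sketch under stated assumptions: the log does not show the original values, so the `CPUWeight=50` and `MemoryMax=128M` below are example replacements, and the drop-in is written to a temp directory so the sketch runs without root (the real target would be `/etc/systemd/system`):

```shell
# Generate a drop-in override replacing the deprecated cgroup-v1 directives.
write_override() {
  mkdir -p "$1/locksmithd.service.d"
  cat > "$1/locksmithd.service.d/10-cgroup-v2.conf" <<'EOF'
[Service]
# CPUShares= (2-262144, default 1024) maps onto CPUWeight= (1-10000,
# default 100); e.g. CPUShares=512 corresponds to CPUWeight=50.
CPUWeight=50
# MemoryLimit= becomes MemoryMax= under cgroup v2.
MemoryMax=128M
EOF
}

dropin_dir="$(mktemp -d)"   # real target: /etc/systemd/system
write_override "$dropin_dir"
cat "$dropin_dir/locksmithd.service.d/10-cgroup-v2.conf"
```

On the real system, `systemctl daemon-reload` would then pick up the override and silence the warnings.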
Dec 13 14:20:17.851423 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:20:17.851689 systemd[1]: Stopped kubelet.service. Dec 13 14:20:17.853812 systemd[1]: Starting kubelet.service... Dec 13 14:20:17.933276 systemd[1]: Started kubelet.service. Dec 13 14:20:17.971695 kubelet[1686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:20:17.971695 kubelet[1686]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:20:17.971695 kubelet[1686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:20:17.972053 kubelet[1686]: I1213 14:20:17.971738 1686 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:20:18.703868 kubelet[1686]: I1213 14:20:18.703832 1686 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:20:18.703868 kubelet[1686]: I1213 14:20:18.703864 1686 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:20:18.704079 kubelet[1686]: I1213 14:20:18.704063 1686 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:20:18.735309 kubelet[1686]: I1213 14:20:18.735272 1686 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:20:18.735560 kubelet[1686]: E1213 14:20:18.735534 1686 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.742658 kubelet[1686]: I1213 14:20:18.742639 1686 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:20:18.743694 kubelet[1686]: I1213 14:20:18.743669 1686 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:20:18.743866 kubelet[1686]: I1213 14:20:18.743849 1686 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:20:18.743866 kubelet[1686]: I1213 14:20:18.743866 1686 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:20:18.743993 kubelet[1686]: I1213 14:20:18.743876 1686 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:20:18.745543 kubelet[1686]: I1213 
14:20:18.745515 1686 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:20:18.749697 kubelet[1686]: I1213 14:20:18.749671 1686 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:20:18.749697 kubelet[1686]: I1213 14:20:18.749698 1686 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:20:18.749767 kubelet[1686]: I1213 14:20:18.749721 1686 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:20:18.749767 kubelet[1686]: I1213 14:20:18.749733 1686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:20:18.750247 kubelet[1686]: W1213 14:20:18.750185 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.750247 kubelet[1686]: E1213 14:20:18.750244 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.750504 kubelet[1686]: W1213 14:20:18.750468 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.750592 kubelet[1686]: E1213 14:20:18.750580 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.750661 kubelet[1686]: I1213 14:20:18.750490 1686 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:20:18.751157 kubelet[1686]: I1213 14:20:18.751138 1686 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:20:18.751816 kubelet[1686]: W1213 14:20:18.751791 1686 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:20:18.752738 kubelet[1686]: I1213 14:20:18.752716 1686 server.go:1256] "Started kubelet" Dec 13 14:20:18.754442 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:20:18.769615 kubelet[1686]: I1213 14:20:18.769594 1686 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:20:18.770368 kubelet[1686]: I1213 14:20:18.770351 1686 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:20:18.771227 kubelet[1686]: I1213 14:20:18.771201 1686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:20:18.771388 kubelet[1686]: I1213 14:20:18.771374 1686 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:20:18.773731 kubelet[1686]: I1213 14:20:18.773706 1686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:20:18.775331 kubelet[1686]: I1213 14:20:18.774822 1686 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:20:18.775331 kubelet[1686]: I1213 14:20:18.774892 1686 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:20:18.775331 kubelet[1686]: I1213 14:20:18.774955 1686 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:20:18.775331 kubelet[1686]: W1213 14:20:18.775201 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.775331 kubelet[1686]: E1213 14:20:18.775234 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.775895 kubelet[1686]: E1213 14:20:18.775877 1686 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Dec 13 14:20:18.776310 kubelet[1686]: E1213 14:20:18.776292 1686 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:20:18.776435 kubelet[1686]: I1213 14:20:18.776423 1686 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:20:18.776508 kubelet[1686]: I1213 14:20:18.776492 1686 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:20:18.777209 kubelet[1686]: I1213 14:20:18.777192 1686 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:20:18.779355 kubelet[1686]: E1213 14:20:18.779326 1686 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c26a64e4ffd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:20:18.752692185 +0000 UTC m=+0.815889378,LastTimestamp:2024-12-13 14:20:18.752692185 +0000 UTC m=+0.815889378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:20:18.788005 kubelet[1686]: I1213 14:20:18.787983 1686 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:20:18.788005 kubelet[1686]: I1213 14:20:18.788001 1686 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:20:18.788103 kubelet[1686]: I1213 14:20:18.788017 1686 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:20:18.877042 kubelet[1686]: I1213 14:20:18.877012 1686 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:20:18.877487 kubelet[1686]: E1213 14:20:18.877467 1686 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Dec 13 14:20:18.913503 kubelet[1686]: I1213 14:20:18.913471 1686 policy_none.go:49] "None policy: Start" Dec 13 14:20:18.914045 kubelet[1686]: I1213 14:20:18.914032 1686 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:20:18.914087 kubelet[1686]: I1213 14:20:18.914070 1686 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:20:18.919954 systemd[1]: Created slice kubepods.slice. Dec 13 14:20:18.921940 kubelet[1686]: I1213 14:20:18.921913 1686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:20:18.923133 kubelet[1686]: I1213 14:20:18.923112 1686 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:20:18.923133 kubelet[1686]: I1213 14:20:18.923137 1686 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:20:18.923225 kubelet[1686]: I1213 14:20:18.923153 1686 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:20:18.923225 kubelet[1686]: E1213 14:20:18.923199 1686 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:20:18.923671 kubelet[1686]: W1213 14:20:18.923633 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.923729 kubelet[1686]: E1213 14:20:18.923681 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Dec 13 14:20:18.924294 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:20:18.926895 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:20:18.930573 kubelet[1686]: I1213 14:20:18.930556 1686 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:20:18.930759 kubelet[1686]: I1213 14:20:18.930747 1686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:20:18.932061 kubelet[1686]: E1213 14:20:18.932041 1686 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:20:18.976709 kubelet[1686]: E1213 14:20:18.976639 1686 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Dec 13 14:20:19.024062 kubelet[1686]: I1213 14:20:19.024031 1686 topology_manager.go:215] "Topology Admit Handler" podUID="994eacc6dc48a1548dd1f11ea8750f28" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:20:19.025085 kubelet[1686]: I1213 14:20:19.025064 1686 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:20:19.025933 kubelet[1686]: I1213 14:20:19.025910 1686 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:20:19.030253 systemd[1]: Created slice kubepods-burstable-pod994eacc6dc48a1548dd1f11ea8750f28.slice. Dec 13 14:20:19.052048 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 14:20:19.063542 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
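[Editor's note] The repeated `dial tcp 10.0.0.130:6443: connect: connection refused` errors above (lease controller, node registration, informer list/watch) are normal during control-plane bootstrap: the kubelet starts before the kube-apiserver static pod it is about to create, and retries with a growing interval (200ms, then 400ms in this log). A small sketch that pulls the unreachable endpoint out of such a journal line, e.g. for scripted health checks; the sample line is abbreviated from the log above and the helper name is illustrative:

```shell
# Extract the host:port the kubelet is retrying against from a journal line.
failing_endpoint() {
  printf '%s\n' "$1" | grep -oE '[0-9]+(\.[0-9]+){3}:[0-9]+' | head -n1
}

line='E1213 14:20:18.775877 controller.go:145 "Failed to ensure lease exists, will retry" err="... dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms"'
failing_endpoint "$line"   # → 10.0.0.130:6443
```

Once the apiserver sandbox started below comes up, these retries stop on their own; no operator action is required.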
Dec 13 14:20:19.077042 kubelet[1686]: I1213 14:20:19.077010 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:19.077117 kubelet[1686]: I1213 14:20:19.077050 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:19.077117 kubelet[1686]: I1213 14:20:19.077073 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:19.077117 kubelet[1686]: I1213 14:20:19.077093 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:19.077117 kubelet[1686]: I1213 14:20:19.077113 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:19.077209 kubelet[1686]: I1213 14:20:19.077137 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:19.077209 kubelet[1686]: I1213 14:20:19.077158 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:19.077209 kubelet[1686]: I1213 14:20:19.077180 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:19.077209 kubelet[1686]: I1213 14:20:19.077198 1686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 14:20:19.078680 kubelet[1686]: I1213 14:20:19.078658 1686 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 14:20:19.079064 kubelet[1686]: E1213 14:20:19.079042 1686 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Dec 13 14:20:19.352195 kubelet[1686]: E1213 14:20:19.352149 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:19.353196 env[1214]: time="2024-12-13T14:20:19.352837629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:994eacc6dc48a1548dd1f11ea8750f28,Namespace:kube-system,Attempt:0,}"
Dec 13 14:20:19.362922 kubelet[1686]: E1213 14:20:19.362894 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:19.363337 env[1214]: time="2024-12-13T14:20:19.363303396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:20:19.365551 kubelet[1686]: E1213 14:20:19.365532 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:19.365883 env[1214]: time="2024-12-13T14:20:19.365852292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Dec 13 14:20:19.378074 kubelet[1686]: E1213 14:20:19.377126 1686 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms"
Dec 13 14:20:19.480915 kubelet[1686]: I1213 14:20:19.480882 1686 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 14:20:19.481496 kubelet[1686]: E1213 14:20:19.481455 1686 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Dec 13 14:20:19.772322 kubelet[1686]: W1213 14:20:19.772255 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:19.772322 kubelet[1686]: E1213 14:20:19.772302 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:19.804156 kubelet[1686]: W1213 14:20:19.804100 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:19.804156 kubelet[1686]: E1213 14:20:19.804152 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:19.875683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887868601.mount: Deactivated successfully.
Dec 13 14:20:19.881839 env[1214]: time="2024-12-13T14:20:19.881018542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.884234 env[1214]: time="2024-12-13T14:20:19.884102245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.886821 env[1214]: time="2024-12-13T14:20:19.886090116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.890103 env[1214]: time="2024-12-13T14:20:19.888932414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.890595 env[1214]: time="2024-12-13T14:20:19.890558098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.893061 env[1214]: time="2024-12-13T14:20:19.892519110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.894849 env[1214]: time="2024-12-13T14:20:19.894350348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.895048 env[1214]: time="2024-12-13T14:20:19.895018447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.895827 env[1214]: time="2024-12-13T14:20:19.895797616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.898544 env[1214]: time="2024-12-13T14:20:19.898454984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.899806 env[1214]: time="2024-12-13T14:20:19.899776674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.902238 env[1214]: time="2024-12-13T14:20:19.902207386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:19.948545 env[1214]: time="2024-12-13T14:20:19.948467891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:20:19.948545 env[1214]: time="2024-12-13T14:20:19.948508738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:20:19.948545 env[1214]: time="2024-12-13T14:20:19.948518570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:20:19.948850 env[1214]: time="2024-12-13T14:20:19.948806097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b3fb1e977397f684ae1f58473617462cce0975dbc692c192e54bbd6170fe1e pid=1741 runtime=io.containerd.runc.v2
Dec 13 14:20:19.950832 env[1214]: time="2024-12-13T14:20:19.950746406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:20:19.950832 env[1214]: time="2024-12-13T14:20:19.950781298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:20:19.950832 env[1214]: time="2024-12-13T14:20:19.950791849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:20:19.951044 env[1214]: time="2024-12-13T14:20:19.950916988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6336265af351e54ce4394b9b7d4e01766071bbe00e13508cdff641df0013943 pid=1732 runtime=io.containerd.runc.v2
Dec 13 14:20:19.953385 env[1214]: time="2024-12-13T14:20:19.953318084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:20:19.953385 env[1214]: time="2024-12-13T14:20:19.953352536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:20:19.953385 env[1214]: time="2024-12-13T14:20:19.953367204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:20:19.953496 env[1214]: time="2024-12-13T14:20:19.953459210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e3ef1ab2a5b7018eedf441413e85953cea265cdd26bc15ea91f7daad55b8434 pid=1758 runtime=io.containerd.runc.v2
Dec 13 14:20:19.962475 systemd[1]: Started cri-containerd-72b3fb1e977397f684ae1f58473617462cce0975dbc692c192e54bbd6170fe1e.scope.
Dec 13 14:20:19.966094 systemd[1]: Started cri-containerd-c6336265af351e54ce4394b9b7d4e01766071bbe00e13508cdff641df0013943.scope.
Dec 13 14:20:19.990533 systemd[1]: Started cri-containerd-3e3ef1ab2a5b7018eedf441413e85953cea265cdd26bc15ea91f7daad55b8434.scope.
Dec 13 14:20:20.031071 env[1214]: time="2024-12-13T14:20:20.029739056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"72b3fb1e977397f684ae1f58473617462cce0975dbc692c192e54bbd6170fe1e\""
Dec 13 14:20:20.032791 kubelet[1686]: E1213 14:20:20.032747 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:20.036316 env[1214]: time="2024-12-13T14:20:20.036277064Z" level=info msg="CreateContainer within sandbox \"72b3fb1e977397f684ae1f58473617462cce0975dbc692c192e54bbd6170fe1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:20:20.041442 env[1214]: time="2024-12-13T14:20:20.041408989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:994eacc6dc48a1548dd1f11ea8750f28,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6336265af351e54ce4394b9b7d4e01766071bbe00e13508cdff641df0013943\""
Dec 13 14:20:20.042202 kubelet[1686]: E1213 14:20:20.042182 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:20.044101 env[1214]: time="2024-12-13T14:20:20.044064628Z" level=info msg="CreateContainer within sandbox \"c6336265af351e54ce4394b9b7d4e01766071bbe00e13508cdff641df0013943\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:20:20.054286 env[1214]: time="2024-12-13T14:20:20.054209921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e3ef1ab2a5b7018eedf441413e85953cea265cdd26bc15ea91f7daad55b8434\""
Dec 13 14:20:20.055130 kubelet[1686]: E1213 14:20:20.055066 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:20.057107 env[1214]: time="2024-12-13T14:20:20.057070654Z" level=info msg="CreateContainer within sandbox \"3e3ef1ab2a5b7018eedf441413e85953cea265cdd26bc15ea91f7daad55b8434\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:20:20.057542 env[1214]: time="2024-12-13T14:20:20.057508784Z" level=info msg="CreateContainer within sandbox \"72b3fb1e977397f684ae1f58473617462cce0975dbc692c192e54bbd6170fe1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"35e96435711ba2ff40376fdec117e97cdd6dc79c7efa4e5c8b3a46a65a13da00\""
Dec 13 14:20:20.058134 env[1214]: time="2024-12-13T14:20:20.058107999Z" level=info msg="StartContainer for \"35e96435711ba2ff40376fdec117e97cdd6dc79c7efa4e5c8b3a46a65a13da00\""
Dec 13 14:20:20.061619 env[1214]: time="2024-12-13T14:20:20.061580260Z" level=info msg="CreateContainer within sandbox \"c6336265af351e54ce4394b9b7d4e01766071bbe00e13508cdff641df0013943\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"725d165ff5f7a454f053fe5ac66ed7cd1ad1d5e2c97e0ef087097485b862688c\""
Dec 13 14:20:20.062892 env[1214]: time="2024-12-13T14:20:20.062846403Z" level=info msg="StartContainer for \"725d165ff5f7a454f053fe5ac66ed7cd1ad1d5e2c97e0ef087097485b862688c\""
Dec 13 14:20:20.072869 systemd[1]: Started cri-containerd-35e96435711ba2ff40376fdec117e97cdd6dc79c7efa4e5c8b3a46a65a13da00.scope.
Dec 13 14:20:20.073883 kubelet[1686]: W1213 14:20:20.073785 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:20.073883 kubelet[1686]: E1213 14:20:20.073858 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:20.075319 env[1214]: time="2024-12-13T14:20:20.075279595Z" level=info msg="CreateContainer within sandbox \"3e3ef1ab2a5b7018eedf441413e85953cea265cdd26bc15ea91f7daad55b8434\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0eba7f7a55cce241effc7373ba961e7ba6637e092272f1923738b5cab9f10658\""
Dec 13 14:20:20.077774 env[1214]: time="2024-12-13T14:20:20.077740572Z" level=info msg="StartContainer for \"0eba7f7a55cce241effc7373ba961e7ba6637e092272f1923738b5cab9f10658\""
Dec 13 14:20:20.080432 systemd[1]: Started cri-containerd-725d165ff5f7a454f053fe5ac66ed7cd1ad1d5e2c97e0ef087097485b862688c.scope.
Dec 13 14:20:20.108364 systemd[1]: Started cri-containerd-0eba7f7a55cce241effc7373ba961e7ba6637e092272f1923738b5cab9f10658.scope.
Dec 13 14:20:20.119996 kubelet[1686]: W1213 14:20:20.116368 1686 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:20.119996 kubelet[1686]: E1213 14:20:20.116422 1686 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Dec 13 14:20:20.148900 env[1214]: time="2024-12-13T14:20:20.148854996Z" level=info msg="StartContainer for \"725d165ff5f7a454f053fe5ac66ed7cd1ad1d5e2c97e0ef087097485b862688c\" returns successfully"
Dec 13 14:20:20.149146 env[1214]: time="2024-12-13T14:20:20.148908838Z" level=info msg="StartContainer for \"35e96435711ba2ff40376fdec117e97cdd6dc79c7efa4e5c8b3a46a65a13da00\" returns successfully"
Dec 13 14:20:20.170121 env[1214]: time="2024-12-13T14:20:20.170081480Z" level=info msg="StartContainer for \"0eba7f7a55cce241effc7373ba961e7ba6637e092272f1923738b5cab9f10658\" returns successfully"
Dec 13 14:20:20.181707 kubelet[1686]: E1213 14:20:20.180375 1686 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s"
Dec 13 14:20:20.282786 kubelet[1686]: I1213 14:20:20.282677 1686 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 14:20:20.283049 kubelet[1686]: E1213 14:20:20.283026 1686 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Dec 13 14:20:20.933509 kubelet[1686]: E1213 14:20:20.933482 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:20.935386 kubelet[1686]: E1213 14:20:20.935323 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:20.936355 kubelet[1686]: E1213 14:20:20.936304 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:21.751689 kubelet[1686]: I1213 14:20:21.751648 1686 apiserver.go:52] "Watching apiserver"
Dec 13 14:20:21.775262 kubelet[1686]: I1213 14:20:21.775226 1686 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:20:21.784190 kubelet[1686]: E1213 14:20:21.784170 1686 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 14:20:21.884826 kubelet[1686]: I1213 14:20:21.884802 1686 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 14:20:21.889744 kubelet[1686]: I1213 14:20:21.889719 1686 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 14:20:21.941826 kubelet[1686]: E1213 14:20:21.941785 1686 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:21.942276 kubelet[1686]: E1213 14:20:21.942261 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:22.579890 kubelet[1686]: E1213 14:20:22.579832 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:22.938824 kubelet[1686]: E1213 14:20:22.938739 1686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:24.065536 systemd[1]: Reloading.
Dec 13 14:20:24.115848 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-12-13T14:20:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:20:24.115883 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-12-13T14:20:24Z" level=info msg="torcx already run"
Dec 13 14:20:24.175454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:20:24.175640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:20:24.191001 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:20:24.271619 systemd[1]: Stopping kubelet.service...
Dec 13 14:20:24.290489 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:20:24.290665 systemd[1]: Stopped kubelet.service.
Dec 13 14:20:24.290710 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
Dec 13 14:20:24.292174 systemd[1]: Starting kubelet.service...
Dec 13 14:20:24.374197 systemd[1]: Started kubelet.service.
Dec 13 14:20:24.428104 kubelet[2025]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:20:24.428104 kubelet[2025]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:20:24.428104 kubelet[2025]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:20:24.428520 kubelet[2025]: I1213 14:20:24.428148 2025 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:20:24.433420 kubelet[2025]: I1213 14:20:24.433374 2025 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:20:24.433420 kubelet[2025]: I1213 14:20:24.433401 2025 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:20:24.433580 kubelet[2025]: I1213 14:20:24.433552 2025 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:20:24.435269 kubelet[2025]: I1213 14:20:24.435204 2025 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:20:24.437208 kubelet[2025]: I1213 14:20:24.437167 2025 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:20:24.444900 kubelet[2025]: I1213 14:20:24.444866 2025 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:20:24.445219 kubelet[2025]: I1213 14:20:24.445197 2025 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:20:24.445557 kubelet[2025]: I1213 14:20:24.445534 2025 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:20:24.445664 kubelet[2025]: I1213 14:20:24.445567 2025 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:20:24.445664 kubelet[2025]: I1213 14:20:24.445577 2025 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:20:24.445664 kubelet[2025]: I1213 14:20:24.445618 2025 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:20:24.445745 kubelet[2025]: I1213 14:20:24.445703 2025 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:20:24.445745 kubelet[2025]: I1213 14:20:24.445721 2025 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:20:24.445745 kubelet[2025]: I1213 14:20:24.445742 2025 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:20:24.445816 kubelet[2025]: I1213 14:20:24.445756 2025 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:20:24.446407 kubelet[2025]: I1213 14:20:24.446303 2025 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:20:24.447757 kubelet[2025]: I1213 14:20:24.446617 2025 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:20:24.447757 kubelet[2025]: I1213 14:20:24.447125 2025 server.go:1256] "Started kubelet"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.450634 2025 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:20:24.462577 kubelet[2025]: E1213 14:20:24.455615 2025 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.457660 2025 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.457907 2025 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.458214 2025 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.458425 2025 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.458450 2025 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:20:24.462577 kubelet[2025]: E1213 14:20:24.458499 2025 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.458722 2025 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.458845 2025 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.460203 2025 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.460310 2025 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:20:24.462577 kubelet[2025]: I1213 14:20:24.462121 2025 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:20:24.489599 kubelet[2025]: I1213 14:20:24.489573 2025 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:20:24.493388 kubelet[2025]: I1213 14:20:24.493371 2025 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:20:24.493506 kubelet[2025]: I1213 14:20:24.493493 2025 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:20:24.493582 kubelet[2025]: I1213 14:20:24.493571 2025 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:20:24.493687 kubelet[2025]: E1213 14:20:24.493675 2025 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:20:24.503212 kubelet[2025]: I1213 14:20:24.503189 2025 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:20:24.503212 kubelet[2025]: I1213 14:20:24.503211 2025 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:20:24.503342 kubelet[2025]: I1213 14:20:24.503231 2025 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:20:24.503430 kubelet[2025]: I1213 14:20:24.503414 2025 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:20:24.503458 kubelet[2025]: I1213 14:20:24.503441 2025 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:20:24.503458 kubelet[2025]: I1213 14:20:24.503449 2025 policy_none.go:49] "None policy: Start"
Dec 13 14:20:24.504235 kubelet[2025]: I1213 14:20:24.504212 2025 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:20:24.504235 kubelet[2025]: I1213 14:20:24.504240 2025 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:20:24.504493 kubelet[2025]: I1213 14:20:24.504474 2025 state_mem.go:75] "Updated machine memory state"
Dec 13 14:20:24.510429 kubelet[2025]: I1213 14:20:24.509877 2025 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:20:24.510698 kubelet[2025]: I1213 14:20:24.510494 2025 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:20:24.561997 kubelet[2025]: I1213 14:20:24.561947 2025 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 14:20:24.573781 kubelet[2025]: I1213 14:20:24.573661 2025 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 14:20:24.573781 kubelet[2025]: I1213 14:20:24.573734 2025 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 14:20:24.594490 kubelet[2025]: I1213 14:20:24.594439 2025 topology_manager.go:215] "Topology Admit Handler" podUID="994eacc6dc48a1548dd1f11ea8750f28" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 14:20:24.594601 kubelet[2025]: I1213 14:20:24.594570 2025 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 14:20:24.594687 kubelet[2025]: I1213 14:20:24.594657 2025 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 14:20:24.603784 kubelet[2025]: E1213 14:20:24.603759 2025 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 14:20:24.659332 kubelet[2025]: I1213 14:20:24.659172 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:24.659332 kubelet[2025]: I1213 14:20:24.659291 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:24.659332 kubelet[2025]: I1213 14:20:24.659322 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 14:20:24.659491 kubelet[2025]: I1213 14:20:24.659371 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:24.659491 kubelet[2025]: I1213 14:20:24.659397 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/994eacc6dc48a1548dd1f11ea8750f28-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"994eacc6dc48a1548dd1f11ea8750f28\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:20:24.659491 kubelet[2025]: I1213 14:20:24.659447 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:24.659491 kubelet[2025]: I1213 14:20:24.659468 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:24.659590 kubelet[2025]: I1213 14:20:24.659527 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:24.659590 kubelet[2025]: I1213 14:20:24.659550 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:20:24.904296 kubelet[2025]: E1213 14:20:24.904261 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:24.904978 kubelet[2025]: E1213 14:20:24.904927 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:24.905186 kubelet[2025]: E1213 14:20:24.905166 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:25.411952 sudo[1314]: pam_unix(sudo:session): session closed for user root
Dec 13 14:20:25.413657 sshd[1310]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:25.416301 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:20:25.416480 systemd[1]: session-5.scope: Consumed 6.145s CPU time.
Dec 13 14:20:25.417150 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:20:25.417252 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:50242.service: Deactivated successfully. Dec 13 14:20:25.418492 systemd-logind[1200]: Removed session 5. Dec 13 14:20:25.454881 kubelet[2025]: I1213 14:20:25.454814 2025 apiserver.go:52] "Watching apiserver" Dec 13 14:20:25.459418 kubelet[2025]: I1213 14:20:25.459396 2025 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:20:25.505658 kubelet[2025]: E1213 14:20:25.505624 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:25.506297 kubelet[2025]: E1213 14:20:25.506276 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:25.512863 kubelet[2025]: E1213 14:20:25.512819 2025 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:20:25.513317 kubelet[2025]: E1213 14:20:25.513291 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:25.523770 kubelet[2025]: I1213 14:20:25.523736 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.523700124 podStartE2EDuration="3.523700124s" podCreationTimestamp="2024-12-13 14:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:25.52278562 +0000 UTC m=+1.144899221" watchObservedRunningTime="2024-12-13 14:20:25.523700124 +0000 UTC 
m=+1.145813685" Dec 13 14:20:25.535391 kubelet[2025]: I1213 14:20:25.535350 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.535318253 podStartE2EDuration="1.535318253s" podCreationTimestamp="2024-12-13 14:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:25.529171473 +0000 UTC m=+1.151285074" watchObservedRunningTime="2024-12-13 14:20:25.535318253 +0000 UTC m=+1.157431854" Dec 13 14:20:25.544119 kubelet[2025]: I1213 14:20:25.544085 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.544050963 podStartE2EDuration="1.544050963s" podCreationTimestamp="2024-12-13 14:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:25.536103684 +0000 UTC m=+1.158217245" watchObservedRunningTime="2024-12-13 14:20:25.544050963 +0000 UTC m=+1.166164564" Dec 13 14:20:26.506802 kubelet[2025]: E1213 14:20:26.506734 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:27.508365 kubelet[2025]: E1213 14:20:27.508096 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:28.743390 kubelet[2025]: E1213 14:20:28.743348 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:31.525917 kubelet[2025]: E1213 14:20:31.525500 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:32.515342 kubelet[2025]: E1213 14:20:32.515301 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:35.427750 kubelet[2025]: E1213 14:20:35.427719 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:35.519610 kubelet[2025]: E1213 14:20:35.519569 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:37.957820 kubelet[2025]: I1213 14:20:37.957791 2025 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:20:37.958472 env[1214]: time="2024-12-13T14:20:37.958426857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:20:37.958956 kubelet[2025]: I1213 14:20:37.958932 2025 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:20:37.971922 kubelet[2025]: I1213 14:20:37.971893 2025 topology_manager.go:215] "Topology Admit Handler" podUID="c875705d-12ca-4d7d-81d6-dc5e2f9e7404" podNamespace="kube-system" podName="kube-proxy-bz9nq" Dec 13 14:20:37.976728 systemd[1]: Created slice kubepods-besteffort-podc875705d_12ca_4d7d_81d6_dc5e2f9e7404.slice. Dec 13 14:20:37.982206 kubelet[2025]: I1213 14:20:37.982177 2025 topology_manager.go:215] "Topology Admit Handler" podUID="eafd58c2-978b-4a80-acfc-28677cc0654d" podNamespace="kube-flannel" podName="kube-flannel-ds-5fkt5" Dec 13 14:20:37.988402 systemd[1]: Created slice kubepods-burstable-podeafd58c2_978b_4a80_acfc_28677cc0654d.slice. 
Dec 13 14:20:38.049562 kubelet[2025]: I1213 14:20:38.049531 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rw8\" (UniqueName: \"kubernetes.io/projected/c875705d-12ca-4d7d-81d6-dc5e2f9e7404-kube-api-access-p9rw8\") pod \"kube-proxy-bz9nq\" (UID: \"c875705d-12ca-4d7d-81d6-dc5e2f9e7404\") " pod="kube-system/kube-proxy-bz9nq" Dec 13 14:20:38.049769 kubelet[2025]: I1213 14:20:38.049755 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/eafd58c2-978b-4a80-acfc-28677cc0654d-cni\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.049876 kubelet[2025]: I1213 14:20:38.049863 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/eafd58c2-978b-4a80-acfc-28677cc0654d-flannel-cfg\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.049988 kubelet[2025]: I1213 14:20:38.049959 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c875705d-12ca-4d7d-81d6-dc5e2f9e7404-kube-proxy\") pod \"kube-proxy-bz9nq\" (UID: \"c875705d-12ca-4d7d-81d6-dc5e2f9e7404\") " pod="kube-system/kube-proxy-bz9nq" Dec 13 14:20:38.050082 kubelet[2025]: I1213 14:20:38.050072 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c875705d-12ca-4d7d-81d6-dc5e2f9e7404-xtables-lock\") pod \"kube-proxy-bz9nq\" (UID: \"c875705d-12ca-4d7d-81d6-dc5e2f9e7404\") " pod="kube-system/kube-proxy-bz9nq" Dec 13 14:20:38.050188 kubelet[2025]: I1213 14:20:38.050176 2025 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c875705d-12ca-4d7d-81d6-dc5e2f9e7404-lib-modules\") pod \"kube-proxy-bz9nq\" (UID: \"c875705d-12ca-4d7d-81d6-dc5e2f9e7404\") " pod="kube-system/kube-proxy-bz9nq" Dec 13 14:20:38.050300 kubelet[2025]: I1213 14:20:38.050288 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/eafd58c2-978b-4a80-acfc-28677cc0654d-cni-plugin\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.050418 kubelet[2025]: I1213 14:20:38.050406 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eafd58c2-978b-4a80-acfc-28677cc0654d-xtables-lock\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.050514 kubelet[2025]: I1213 14:20:38.050502 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxxv\" (UniqueName: \"kubernetes.io/projected/eafd58c2-978b-4a80-acfc-28677cc0654d-kube-api-access-shxxv\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.050597 kubelet[2025]: I1213 14:20:38.050587 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eafd58c2-978b-4a80-acfc-28677cc0654d-run\") pod \"kube-flannel-ds-5fkt5\" (UID: \"eafd58c2-978b-4a80-acfc-28677cc0654d\") " pod="kube-flannel/kube-flannel-ds-5fkt5" Dec 13 14:20:38.285586 kubelet[2025]: E1213 14:20:38.285555 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.286858 env[1214]: time="2024-12-13T14:20:38.286395730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bz9nq,Uid:c875705d-12ca-4d7d-81d6-dc5e2f9e7404,Namespace:kube-system,Attempt:0,}" Dec 13 14:20:38.290222 kubelet[2025]: E1213 14:20:38.290200 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.290997 env[1214]: time="2024-12-13T14:20:38.290620484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5fkt5,Uid:eafd58c2-978b-4a80-acfc-28677cc0654d,Namespace:kube-flannel,Attempt:0,}" Dec 13 14:20:38.310228 env[1214]: time="2024-12-13T14:20:38.310156523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:38.310228 env[1214]: time="2024-12-13T14:20:38.310195243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:38.310228 env[1214]: time="2024-12-13T14:20:38.310205962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:38.310578 env[1214]: time="2024-12-13T14:20:38.310501596Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1bbca6e2cd9709728151c382a8fd5f6f649bee98b27813bff65d8f24eb0ed1b pid=2100 runtime=io.containerd.runc.v2 Dec 13 14:20:38.315573 env[1214]: time="2024-12-13T14:20:38.315504974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:38.315769 env[1214]: time="2024-12-13T14:20:38.315729809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:38.315881 env[1214]: time="2024-12-13T14:20:38.315858807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:38.316386 env[1214]: time="2024-12-13T14:20:38.316343597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811 pid=2116 runtime=io.containerd.runc.v2 Dec 13 14:20:38.321618 systemd[1]: Started cri-containerd-d1bbca6e2cd9709728151c382a8fd5f6f649bee98b27813bff65d8f24eb0ed1b.scope. Dec 13 14:20:38.335645 systemd[1]: Started cri-containerd-3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811.scope. 
Dec 13 14:20:38.361719 env[1214]: time="2024-12-13T14:20:38.361684227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bz9nq,Uid:c875705d-12ca-4d7d-81d6-dc5e2f9e7404,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1bbca6e2cd9709728151c382a8fd5f6f649bee98b27813bff65d8f24eb0ed1b\"" Dec 13 14:20:38.362690 kubelet[2025]: E1213 14:20:38.362657 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.365815 env[1214]: time="2024-12-13T14:20:38.365780783Z" level=info msg="CreateContainer within sandbox \"d1bbca6e2cd9709728151c382a8fd5f6f649bee98b27813bff65d8f24eb0ed1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:20:38.379544 env[1214]: time="2024-12-13T14:20:38.379505022Z" level=info msg="CreateContainer within sandbox \"d1bbca6e2cd9709728151c382a8fd5f6f649bee98b27813bff65d8f24eb0ed1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5128120ce2d1609f29ff9b22d7985f84452b4465d2a59beb30ec6fa83288874a\"" Dec 13 14:20:38.380451 env[1214]: time="2024-12-13T14:20:38.380414724Z" level=info msg="StartContainer for \"5128120ce2d1609f29ff9b22d7985f84452b4465d2a59beb30ec6fa83288874a\"" Dec 13 14:20:38.386725 env[1214]: time="2024-12-13T14:20:38.386687515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5fkt5,Uid:eafd58c2-978b-4a80-acfc-28677cc0654d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\"" Dec 13 14:20:38.387724 kubelet[2025]: E1213 14:20:38.387700 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.391220 env[1214]: time="2024-12-13T14:20:38.391161023Z" level=info msg="PullImage 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 14:20:38.398227 systemd[1]: Started cri-containerd-5128120ce2d1609f29ff9b22d7985f84452b4465d2a59beb30ec6fa83288874a.scope. Dec 13 14:20:38.457724 env[1214]: time="2024-12-13T14:20:38.457675900Z" level=info msg="StartContainer for \"5128120ce2d1609f29ff9b22d7985f84452b4465d2a59beb30ec6fa83288874a\" returns successfully" Dec 13 14:20:38.527741 kubelet[2025]: E1213 14:20:38.527698 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.534220 kubelet[2025]: I1213 14:20:38.533841 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bz9nq" podStartSLOduration=1.5338051799999999 podStartE2EDuration="1.53380518s" podCreationTimestamp="2024-12-13 14:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:38.533556145 +0000 UTC m=+14.155669706" watchObservedRunningTime="2024-12-13 14:20:38.53380518 +0000 UTC m=+14.155918781" Dec 13 14:20:38.755095 kubelet[2025]: E1213 14:20:38.755062 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:39.019081 update_engine[1203]: I1213 14:20:39.018908 1203 update_attempter.cc:509] Updating boot flags... Dec 13 14:20:39.449173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048515015.mount: Deactivated successfully. 
Dec 13 14:20:39.499785 env[1214]: time="2024-12-13T14:20:39.499742088Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:39.502213 env[1214]: time="2024-12-13T14:20:39.502181121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:39.503420 env[1214]: time="2024-12-13T14:20:39.503379857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:39.504804 env[1214]: time="2024-12-13T14:20:39.504773710Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:39.506195 env[1214]: time="2024-12-13T14:20:39.506157763Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 14:20:39.509236 env[1214]: time="2024-12-13T14:20:39.509193744Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 14:20:39.520161 env[1214]: time="2024-12-13T14:20:39.520122131Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193\"" Dec 13 14:20:39.520729 env[1214]: 
time="2024-12-13T14:20:39.520694960Z" level=info msg="StartContainer for \"05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193\"" Dec 13 14:20:39.539189 systemd[1]: Started cri-containerd-05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193.scope. Dec 13 14:20:39.586584 env[1214]: time="2024-12-13T14:20:39.585455858Z" level=info msg="StartContainer for \"05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193\" returns successfully" Dec 13 14:20:39.586683 systemd[1]: cri-containerd-05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193.scope: Deactivated successfully. Dec 13 14:20:39.626493 env[1214]: time="2024-12-13T14:20:39.626435540Z" level=info msg="shim disconnected" id=05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193 Dec 13 14:20:39.626493 env[1214]: time="2024-12-13T14:20:39.626480659Z" level=warning msg="cleaning up after shim disconnected" id=05472651bbd528fb50e17d2df5f9cc3a199d5d24cba2cbe7db35af7212a89193 namespace=k8s.io Dec 13 14:20:39.626493 env[1214]: time="2024-12-13T14:20:39.626490139Z" level=info msg="cleaning up dead shim" Dec 13 14:20:39.632994 env[1214]: time="2024-12-13T14:20:39.632945453Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:20:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2384 runtime=io.containerd.runc.v2\n" Dec 13 14:20:40.531342 kubelet[2025]: E1213 14:20:40.531283 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:40.533907 env[1214]: time="2024-12-13T14:20:40.533869041Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 14:20:41.712598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962427549.mount: Deactivated successfully. 
Dec 13 14:20:42.390784 env[1214]: time="2024-12-13T14:20:42.390734408Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:42.392254 env[1214]: time="2024-12-13T14:20:42.392219343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:42.394441 env[1214]: time="2024-12-13T14:20:42.394411826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:42.396523 env[1214]: time="2024-12-13T14:20:42.396496831Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:42.397234 env[1214]: time="2024-12-13T14:20:42.397205899Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 14:20:42.402113 env[1214]: time="2024-12-13T14:20:42.402081057Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:20:42.411312 env[1214]: time="2024-12-13T14:20:42.411274942Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69\"" Dec 13 14:20:42.411845 env[1214]: time="2024-12-13T14:20:42.411808893Z" level=info msg="StartContainer for 
\"dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69\"" Dec 13 14:20:42.428478 systemd[1]: Started cri-containerd-dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69.scope. Dec 13 14:20:42.459609 env[1214]: time="2024-12-13T14:20:42.458795983Z" level=info msg="StartContainer for \"dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69\" returns successfully" Dec 13 14:20:42.463144 systemd[1]: cri-containerd-dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69.scope: Deactivated successfully. Dec 13 14:20:42.464947 kubelet[2025]: I1213 14:20:42.464806 2025 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:20:42.483832 kubelet[2025]: I1213 14:20:42.483776 2025 topology_manager.go:215] "Topology Admit Handler" podUID="cd762d06-351d-4403-b7dd-f78cc71f48b7" podNamespace="kube-system" podName="coredns-76f75df574-mmjqt" Dec 13 14:20:42.485226 kubelet[2025]: I1213 14:20:42.485197 2025 topology_manager.go:215] "Topology Admit Handler" podUID="79d03888-7024-4548-a247-206189f9698e" podNamespace="kube-system" podName="coredns-76f75df574-pcqdh" Dec 13 14:20:42.491555 systemd[1]: Created slice kubepods-burstable-podcd762d06_351d_4403_b7dd_f78cc71f48b7.slice. Dec 13 14:20:42.500615 systemd[1]: Created slice kubepods-burstable-pod79d03888_7024_4548_a247_206189f9698e.slice. Dec 13 14:20:42.534568 kubelet[2025]: E1213 14:20:42.534528 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:42.566590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69-rootfs.mount: Deactivated successfully. 
Dec 13 14:20:42.585337 kubelet[2025]: I1213 14:20:42.585290 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79d03888-7024-4548-a247-206189f9698e-config-volume\") pod \"coredns-76f75df574-pcqdh\" (UID: \"79d03888-7024-4548-a247-206189f9698e\") " pod="kube-system/coredns-76f75df574-pcqdh" Dec 13 14:20:42.585337 kubelet[2025]: I1213 14:20:42.585338 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85fv\" (UniqueName: \"kubernetes.io/projected/79d03888-7024-4548-a247-206189f9698e-kube-api-access-g85fv\") pod \"coredns-76f75df574-pcqdh\" (UID: \"79d03888-7024-4548-a247-206189f9698e\") " pod="kube-system/coredns-76f75df574-pcqdh" Dec 13 14:20:42.585526 kubelet[2025]: I1213 14:20:42.585363 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd762d06-351d-4403-b7dd-f78cc71f48b7-config-volume\") pod \"coredns-76f75df574-mmjqt\" (UID: \"cd762d06-351d-4403-b7dd-f78cc71f48b7\") " pod="kube-system/coredns-76f75df574-mmjqt" Dec 13 14:20:42.585526 kubelet[2025]: I1213 14:20:42.585406 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7f6c\" (UniqueName: \"kubernetes.io/projected/cd762d06-351d-4403-b7dd-f78cc71f48b7-kube-api-access-g7f6c\") pod \"coredns-76f75df574-mmjqt\" (UID: \"cd762d06-351d-4403-b7dd-f78cc71f48b7\") " pod="kube-system/coredns-76f75df574-mmjqt" Dec 13 14:20:42.605062 env[1214]: time="2024-12-13T14:20:42.605015123Z" level=info msg="shim disconnected" id=dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69 Dec 13 14:20:42.605062 env[1214]: time="2024-12-13T14:20:42.605062602Z" level=warning msg="cleaning up after shim disconnected" id=dc1750c6f9d0b4b1e463256c0cb355be1b98de9e5b8b414df70b3b92344e4e69 
namespace=k8s.io Dec 13 14:20:42.605205 env[1214]: time="2024-12-13T14:20:42.605072682Z" level=info msg="cleaning up dead shim" Dec 13 14:20:42.611680 env[1214]: time="2024-12-13T14:20:42.611637011Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:20:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2441 runtime=io.containerd.runc.v2\n" Dec 13 14:20:42.798891 kubelet[2025]: E1213 14:20:42.798864 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:42.800114 env[1214]: time="2024-12-13T14:20:42.799555730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mmjqt,Uid:cd762d06-351d-4403-b7dd-f78cc71f48b7,Namespace:kube-system,Attempt:0,}" Dec 13 14:20:42.805999 kubelet[2025]: E1213 14:20:42.805725 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:42.807051 env[1214]: time="2024-12-13T14:20:42.806097940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pcqdh,Uid:79d03888-7024-4548-a247-206189f9698e,Namespace:kube-system,Attempt:0,}" Dec 13 14:20:42.838768 env[1214]: time="2024-12-13T14:20:42.838716111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mmjqt,Uid:cd762d06-351d-4403-b7dd-f78cc71f48b7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:20:42.839661 kubelet[2025]: E1213 14:20:42.839288 2025 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:20:42.839661 kubelet[2025]: E1213 14:20:42.839340 2025 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-mmjqt" Dec 13 14:20:42.839661 kubelet[2025]: E1213 14:20:42.839388 2025 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-mmjqt" Dec 13 14:20:42.839661 kubelet[2025]: E1213 14:20:42.839455 2025 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mmjqt_kube-system(cd762d06-351d-4403-b7dd-f78cc71f48b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mmjqt_kube-system(cd762d06-351d-4403-b7dd-f78cc71f48b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-mmjqt" podUID="cd762d06-351d-4403-b7dd-f78cc71f48b7" Dec 13 14:20:42.840954 env[1214]: time="2024-12-13T14:20:42.840867035Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-pcqdh,Uid:79d03888-7024-4548-a247-206189f9698e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bbf3284ecf5ecb9f31f0cdeaa7ff0371fb4af410a7eede505407c75a2959b00\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:20:42.841448 kubelet[2025]: E1213 14:20:42.841319 2025 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bbf3284ecf5ecb9f31f0cdeaa7ff0371fb4af410a7eede505407c75a2959b00\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:20:42.841448 kubelet[2025]: E1213 14:20:42.841356 2025 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bbf3284ecf5ecb9f31f0cdeaa7ff0371fb4af410a7eede505407c75a2959b00\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pcqdh" Dec 13 14:20:42.841448 kubelet[2025]: E1213 14:20:42.841372 2025 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bbf3284ecf5ecb9f31f0cdeaa7ff0371fb4af410a7eede505407c75a2959b00\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pcqdh" Dec 13 14:20:42.841448 kubelet[2025]: E1213 14:20:42.841418 2025 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pcqdh_kube-system(79d03888-7024-4548-a247-206189f9698e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-pcqdh_kube-system(79d03888-7024-4548-a247-206189f9698e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bbf3284ecf5ecb9f31f0cdeaa7ff0371fb4af410a7eede505407c75a2959b00\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-pcqdh" podUID="79d03888-7024-4548-a247-206189f9698e" Dec 13 14:20:43.537460 kubelet[2025]: E1213 14:20:43.537424 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:43.544730 env[1214]: time="2024-12-13T14:20:43.544694135Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 14:20:43.554286 env[1214]: time="2024-12-13T14:20:43.554250862Z" level=info msg="CreateContainer within sandbox \"3a1760f7d5d3f5cd06b3c2314beca510fdf66c2eb83614d124db3198b5ab2811\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"955f73c9feebc6c8c469b0cd45bdb5df66b055e5b446347a81a0ba6110adffd8\"" Dec 13 14:20:43.555438 env[1214]: time="2024-12-13T14:20:43.555405963Z" level=info msg="StartContainer for \"955f73c9feebc6c8c469b0cd45bdb5df66b055e5b446347a81a0ba6110adffd8\"" Dec 13 14:20:43.567283 systemd[1]: run-netns-cni\x2df183b14b\x2df3e5\x2df791\x2d4de3\x2daba590543e43.mount: Deactivated successfully. Dec 13 14:20:43.567363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7f1519bd42c76e343dbbe773b85cfa3d3f62f198832e614b146a4446d094bd5-shm.mount: Deactivated successfully. Dec 13 14:20:43.571486 systemd[1]: Started cri-containerd-955f73c9feebc6c8c469b0cd45bdb5df66b055e5b446347a81a0ba6110adffd8.scope. 
Dec 13 14:20:43.601693 env[1214]: time="2024-12-13T14:20:43.601654741Z" level=info msg="StartContainer for \"955f73c9feebc6c8c469b0cd45bdb5df66b055e5b446347a81a0ba6110adffd8\" returns successfully" Dec 13 14:20:44.542367 kubelet[2025]: E1213 14:20:44.541920 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:44.678128 systemd-networkd[1043]: flannel.1: Link UP Dec 13 14:20:44.678135 systemd-networkd[1043]: flannel.1: Gained carrier Dec 13 14:20:45.543084 kubelet[2025]: E1213 14:20:45.543048 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:46.426122 systemd-networkd[1043]: flannel.1: Gained IPv6LL Dec 13 14:20:50.579283 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:47884.service. Dec 13 14:20:50.618131 sshd[2651]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:20:50.619404 sshd[2651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:50.623032 systemd-logind[1200]: New session 6 of user core. Dec 13 14:20:50.623449 systemd[1]: Started session-6.scope. Dec 13 14:20:50.738871 sshd[2651]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:50.742555 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:47884.service: Deactivated successfully. Dec 13 14:20:50.743263 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:20:50.743721 systemd-logind[1200]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:20:50.744442 systemd-logind[1200]: Removed session 6. 
Dec 13 14:20:53.495167 kubelet[2025]: E1213 14:20:53.495121 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:53.495977 env[1214]: time="2024-12-13T14:20:53.495925080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pcqdh,Uid:79d03888-7024-4548-a247-206189f9698e,Namespace:kube-system,Attempt:0,}" Dec 13 14:20:53.518479 systemd-networkd[1043]: cni0: Link UP Dec 13 14:20:53.518494 systemd-networkd[1043]: cni0: Gained carrier Dec 13 14:20:53.521988 systemd-networkd[1043]: cni0: Lost carrier Dec 13 14:20:53.536721 systemd-networkd[1043]: vethaf3305db: Link UP Dec 13 14:20:53.538994 kernel: cni0: port 1(vethaf3305db) entered blocking state Dec 13 14:20:53.539084 kernel: cni0: port 1(vethaf3305db) entered disabled state Dec 13 14:20:53.543998 kernel: device vethaf3305db entered promiscuous mode Dec 13 14:20:53.544059 kernel: cni0: port 1(vethaf3305db) entered blocking state Dec 13 14:20:53.544087 kernel: cni0: port 1(vethaf3305db) entered forwarding state Dec 13 14:20:53.545018 kernel: cni0: port 1(vethaf3305db) entered disabled state Dec 13 14:20:53.555145 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethaf3305db: link becomes ready Dec 13 14:20:53.555215 kernel: cni0: port 1(vethaf3305db) entered blocking state Dec 13 14:20:53.555235 kernel: cni0: port 1(vethaf3305db) entered forwarding state Dec 13 14:20:53.555456 systemd-networkd[1043]: vethaf3305db: Gained carrier Dec 13 14:20:53.555940 systemd-networkd[1043]: cni0: Gained carrier Dec 13 14:20:53.559763 env[1214]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 
0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000020928), "name":"cbr0", "type":"bridge"} Dec 13 14:20:53.559763 env[1214]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:20:53.568603 env[1214]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:20:53.568533998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:53.568603 env[1214]: time="2024-12-13T14:20:53.568571678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:53.568603 env[1214]: time="2024-12-13T14:20:53.568594198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:53.568771 env[1214]: time="2024-12-13T14:20:53.568724276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac pid=2712 runtime=io.containerd.runc.v2 Dec 13 14:20:53.584980 systemd[1]: run-containerd-runc-k8s.io-f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac-runc.6QdOaG.mount: Deactivated successfully. Dec 13 14:20:53.587481 systemd[1]: Started cri-containerd-f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac.scope. 
Dec 13 14:20:53.607381 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:53.624395 env[1214]: time="2024-12-13T14:20:53.624358292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pcqdh,Uid:79d03888-7024-4548-a247-206189f9698e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac\"" Dec 13 14:20:53.625131 kubelet[2025]: E1213 14:20:53.625110 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:53.631639 env[1214]: time="2024-12-13T14:20:53.631603856Z" level=info msg="CreateContainer within sandbox \"f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:20:53.641602 env[1214]: time="2024-12-13T14:20:53.641568392Z" level=info msg="CreateContainer within sandbox \"f9a89d7febf5911913dcca4f4f9ed6be75f65f8bf9e6f786aef57680b22d12ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7f85065beee7d6c3c0f7227fb7d88d82200db51079aaed46a3fa3c37e75c8c2\"" Dec 13 14:20:53.642279 env[1214]: time="2024-12-13T14:20:53.642258184Z" level=info msg="StartContainer for \"f7f85065beee7d6c3c0f7227fb7d88d82200db51079aaed46a3fa3c37e75c8c2\"" Dec 13 14:20:53.655574 systemd[1]: Started cri-containerd-f7f85065beee7d6c3c0f7227fb7d88d82200db51079aaed46a3fa3c37e75c8c2.scope. 
Dec 13 14:20:53.683952 env[1214]: time="2024-12-13T14:20:53.683910227Z" level=info msg="StartContainer for \"f7f85065beee7d6c3c0f7227fb7d88d82200db51079aaed46a3fa3c37e75c8c2\" returns successfully" Dec 13 14:20:54.564515 kubelet[2025]: E1213 14:20:54.564293 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:54.573921 kubelet[2025]: I1213 14:20:54.573879 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-5fkt5" podStartSLOduration=13.567072845 podStartE2EDuration="17.573830347s" podCreationTimestamp="2024-12-13 14:20:37 +0000 UTC" firstStartedPulling="2024-12-13 14:20:38.390668673 +0000 UTC m=+14.012782234" lastFinishedPulling="2024-12-13 14:20:42.397426135 +0000 UTC m=+18.019539736" observedRunningTime="2024-12-13 14:20:44.554941449 +0000 UTC m=+20.177055050" watchObservedRunningTime="2024-12-13 14:20:54.573830347 +0000 UTC m=+30.195943948" Dec 13 14:20:54.582358 kubelet[2025]: I1213 14:20:54.582302 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pcqdh" podStartSLOduration=16.582256142 podStartE2EDuration="16.582256142s" podCreationTimestamp="2024-12-13 14:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:54.573801348 +0000 UTC m=+30.195914949" watchObservedRunningTime="2024-12-13 14:20:54.582256142 +0000 UTC m=+30.204369743" Dec 13 14:20:54.938121 systemd-networkd[1043]: cni0: Gained IPv6LL Dec 13 14:20:55.002108 systemd-networkd[1043]: vethaf3305db: Gained IPv6LL Dec 13 14:20:55.495047 kubelet[2025]: E1213 14:20:55.495007 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
14:20:55.495503 env[1214]: time="2024-12-13T14:20:55.495462045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mmjqt,Uid:cd762d06-351d-4403-b7dd-f78cc71f48b7,Namespace:kube-system,Attempt:0,}" Dec 13 14:20:55.516778 systemd-networkd[1043]: veth7988ed23: Link UP Dec 13 14:20:55.519493 kernel: cni0: port 2(veth7988ed23) entered blocking state Dec 13 14:20:55.519566 kernel: cni0: port 2(veth7988ed23) entered disabled state Dec 13 14:20:55.519583 kernel: device veth7988ed23 entered promiscuous mode Dec 13 14:20:55.526508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:20:55.526579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7988ed23: link becomes ready Dec 13 14:20:55.526603 kernel: cni0: port 2(veth7988ed23) entered blocking state Dec 13 14:20:55.526618 kernel: cni0: port 2(veth7988ed23) entered forwarding state Dec 13 14:20:55.527228 systemd-networkd[1043]: veth7988ed23: Gained carrier Dec 13 14:20:55.528706 env[1214]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a928), "name":"cbr0", "type":"bridge"} Dec 13 14:20:55.528706 env[1214]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:20:55.537856 env[1214]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:20:55.537512995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:55.537856 env[1214]: time="2024-12-13T14:20:55.537557435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:55.537856 env[1214]: time="2024-12-13T14:20:55.537567074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:55.537856 env[1214]: time="2024-12-13T14:20:55.537703993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893 pid=2847 runtime=io.containerd.runc.v2 Dec 13 14:20:55.557255 systemd[1]: Started cri-containerd-a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893.scope. Dec 13 14:20:55.560039 systemd[1]: run-containerd-runc-k8s.io-a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893-runc.6qO9Bv.mount: Deactivated successfully. 
Dec 13 14:20:55.568006 kubelet[2025]: E1213 14:20:55.565982 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:55.584358 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:55.604929 env[1214]: time="2024-12-13T14:20:55.604887978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mmjqt,Uid:cd762d06-351d-4403-b7dd-f78cc71f48b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893\"" Dec 13 14:20:55.605834 kubelet[2025]: E1213 14:20:55.605641 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:55.608232 env[1214]: time="2024-12-13T14:20:55.607982308Z" level=info msg="CreateContainer within sandbox \"a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:20:55.617795 env[1214]: time="2024-12-13T14:20:55.617750732Z" level=info msg="CreateContainer within sandbox \"a6b9e73a0f6a3c08ae083d17193df583c6a4d53d16ca7c86c7e8114008f25893\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e7d480e4790e6891b538ada202896e449a2563d57da8edfaabc498f061c650b\"" Dec 13 14:20:55.618738 env[1214]: time="2024-12-13T14:20:55.618201288Z" level=info msg="StartContainer for \"9e7d480e4790e6891b538ada202896e449a2563d57da8edfaabc498f061c650b\"" Dec 13 14:20:55.631460 systemd[1]: Started cri-containerd-9e7d480e4790e6891b538ada202896e449a2563d57da8edfaabc498f061c650b.scope. 
Dec 13 14:20:55.678475 env[1214]: time="2024-12-13T14:20:55.678433901Z" level=info msg="StartContainer for \"9e7d480e4790e6891b538ada202896e449a2563d57da8edfaabc498f061c650b\" returns successfully" Dec 13 14:20:55.743897 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:33068.service. Dec 13 14:20:55.782640 sshd[2923]: Accepted publickey for core from 10.0.0.1 port 33068 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:20:55.784366 sshd[2923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:55.788021 systemd-logind[1200]: New session 7 of user core. Dec 13 14:20:55.788509 systemd[1]: Started session-7.scope. Dec 13 14:20:55.899144 sshd[2923]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:55.901718 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:33068.service: Deactivated successfully. Dec 13 14:20:55.902424 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:20:55.902933 systemd-logind[1200]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:20:55.903646 systemd-logind[1200]: Removed session 7. Dec 13 14:20:56.569248 kubelet[2025]: E1213 14:20:56.569043 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:56.569248 kubelet[2025]: E1213 14:20:56.569168 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:56.603071 systemd-networkd[1043]: veth7988ed23: Gained IPv6LL Dec 13 14:21:00.903915 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:33084.service. 
Dec 13 14:21:00.943037 sshd[2960]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:00.944725 sshd[2960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:00.948444 systemd-logind[1200]: New session 8 of user core. Dec 13 14:21:00.949385 systemd[1]: Started session-8.scope. Dec 13 14:21:01.058533 sshd[2960]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:01.061791 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:33086.service. Dec 13 14:21:01.064786 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:21:01.065505 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:33084.service: Deactivated successfully. Dec 13 14:21:01.066029 systemd-logind[1200]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:21:01.066660 systemd-logind[1200]: Removed session 8. Dec 13 14:21:01.102252 sshd[2973]: Accepted publickey for core from 10.0.0.1 port 33086 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:01.103460 sshd[2973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:01.106509 systemd-logind[1200]: New session 9 of user core. Dec 13 14:21:01.107312 systemd[1]: Started session-9.scope. Dec 13 14:21:01.264857 sshd[2973]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:01.268087 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:33088.service. Dec 13 14:21:01.270761 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:21:01.271511 systemd-logind[1200]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:21:01.271600 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:33086.service: Deactivated successfully. Dec 13 14:21:01.273349 systemd-logind[1200]: Removed session 9. 
Dec 13 14:21:01.317746 sshd[2985]: Accepted publickey for core from 10.0.0.1 port 33088 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:01.319137 sshd[2985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:01.323255 systemd[1]: Started session-10.scope. Dec 13 14:21:01.323281 systemd-logind[1200]: New session 10 of user core. Dec 13 14:21:01.430255 sshd[2985]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:01.432578 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:33088.service: Deactivated successfully. Dec 13 14:21:01.433266 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:21:01.433726 systemd-logind[1200]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:21:01.434451 systemd-logind[1200]: Removed session 10. Dec 13 14:21:02.800517 kubelet[2025]: E1213 14:21:02.800486 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:02.809555 kubelet[2025]: I1213 14:21:02.809522 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mmjqt" podStartSLOduration=24.809489144 podStartE2EDuration="24.809489144s" podCreationTimestamp="2024-12-13 14:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:56.577147329 +0000 UTC m=+32.199260930" watchObservedRunningTime="2024-12-13 14:21:02.809489144 +0000 UTC m=+38.431602705" Dec 13 14:21:03.581813 kubelet[2025]: E1213 14:21:03.581780 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:06.435476 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:54736.service. 
Dec 13 14:21:06.475463 sshd[3026]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.476662 sshd[3026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.479939 systemd-logind[1200]: New session 11 of user core. Dec 13 14:21:06.480743 systemd[1]: Started session-11.scope. Dec 13 14:21:06.591799 sshd[3026]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.595642 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:54752.service. Dec 13 14:21:06.596099 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:54736.service: Deactivated successfully. Dec 13 14:21:06.597012 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:21:06.597031 systemd-logind[1200]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:21:06.598166 systemd-logind[1200]: Removed session 11. Dec 13 14:21:06.634710 sshd[3038]: Accepted publickey for core from 10.0.0.1 port 54752 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.635857 sshd[3038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.639255 systemd-logind[1200]: New session 12 of user core. Dec 13 14:21:06.640284 systemd[1]: Started session-12.scope. Dec 13 14:21:06.845711 sshd[3038]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.849996 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:54764.service. Dec 13 14:21:06.850567 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:54752.service: Deactivated successfully. Dec 13 14:21:06.851262 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:21:06.851827 systemd-logind[1200]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:21:06.852515 systemd-logind[1200]: Removed session 12. 
Dec 13 14:21:06.888931 sshd[3049]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.890250 sshd[3049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.893478 systemd-logind[1200]: New session 13 of user core. Dec 13 14:21:06.894301 systemd[1]: Started session-13.scope. Dec 13 14:21:08.058316 sshd[3049]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:08.061542 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:54778.service. Dec 13 14:21:08.062111 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:54764.service: Deactivated successfully. Dec 13 14:21:08.062766 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:21:08.063502 systemd-logind[1200]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:21:08.064799 systemd-logind[1200]: Removed session 13. Dec 13 14:21:08.104675 sshd[3068]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:08.106336 sshd[3068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:08.109649 systemd-logind[1200]: New session 14 of user core. Dec 13 14:21:08.110487 systemd[1]: Started session-14.scope. Dec 13 14:21:08.327355 sshd[3068]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:08.336684 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:54778.service: Deactivated successfully. Dec 13 14:21:08.337307 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:21:08.337833 systemd-logind[1200]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:21:08.338944 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:54780.service. Dec 13 14:21:08.341369 systemd-logind[1200]: Removed session 14. 
Dec 13 14:21:08.378276 sshd[3081]: Accepted publickey for core from 10.0.0.1 port 54780 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:08.379365 sshd[3081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:08.382868 systemd-logind[1200]: New session 15 of user core. Dec 13 14:21:08.383210 systemd[1]: Started session-15.scope. Dec 13 14:21:08.494845 sshd[3081]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:08.498686 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:54780.service: Deactivated successfully. Dec 13 14:21:08.499537 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:21:08.500182 systemd-logind[1200]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:21:08.501125 systemd-logind[1200]: Removed session 15. Dec 13 14:21:13.499371 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:54768.service. Dec 13 14:21:13.538494 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 54768 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:13.540265 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:13.543696 systemd-logind[1200]: New session 16 of user core. Dec 13 14:21:13.544595 systemd[1]: Started session-16.scope. Dec 13 14:21:13.652864 sshd[3121]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:13.655258 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:54768.service: Deactivated successfully. Dec 13 14:21:13.656015 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:21:13.656529 systemd-logind[1200]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:21:13.657245 systemd-logind[1200]: Removed session 16. Dec 13 14:21:18.657519 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:54770.service. 
Dec 13 14:21:18.695942 sshd[3156]: Accepted publickey for core from 10.0.0.1 port 54770 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:18.697659 sshd[3156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:18.701132 systemd-logind[1200]: New session 17 of user core. Dec 13 14:21:18.701539 systemd[1]: Started session-17.scope. Dec 13 14:21:18.807672 sshd[3156]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:18.810033 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:54770.service: Deactivated successfully. Dec 13 14:21:18.810736 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:21:18.811316 systemd-logind[1200]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:21:18.812067 systemd-logind[1200]: Removed session 17. Dec 13 14:21:23.812704 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:35270.service. Dec 13 14:21:23.850850 sshd[3191]: Accepted publickey for core from 10.0.0.1 port 35270 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:23.852033 sshd[3191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:23.855144 systemd-logind[1200]: New session 18 of user core. Dec 13 14:21:23.855934 systemd[1]: Started session-18.scope. Dec 13 14:21:23.959448 sshd[3191]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:23.961650 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:35270.service: Deactivated successfully. Dec 13 14:21:23.962378 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:21:23.962886 systemd-logind[1200]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:21:23.963645 systemd-logind[1200]: Removed session 18.