Jul 14 21:43:33.759023 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 14 21:43:33.759042 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 14 20:49:56 -00 2025 Jul 14 21:43:33.759050 kernel: efi: EFI v2.70 by EDK II Jul 14 21:43:33.759056 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 14 21:43:33.759061 kernel: random: crng init done Jul 14 21:43:33.759066 kernel: ACPI: Early table checksum verification disabled Jul 14 21:43:33.759073 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 14 21:43:33.759080 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 14 21:43:33.759085 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759091 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759097 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759102 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759107 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759113 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759121 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759127 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759132 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:43:33.759138 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 14 21:43:33.759144 kernel: NUMA: Failed to initialise from firmware Jul 14 21:43:33.759150 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:43:33.759155 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Jul 14 21:43:33.759161 kernel: Zone ranges: Jul 14 21:43:33.759167 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:43:33.759173 kernel: DMA32 empty Jul 14 21:43:33.759179 kernel: Normal empty Jul 14 21:43:33.759185 kernel: Movable zone start for each node Jul 14 21:43:33.759190 kernel: Early memory node ranges Jul 14 21:43:33.759196 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 14 21:43:33.759202 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 14 21:43:33.759207 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 14 21:43:33.759213 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 14 21:43:33.759219 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 14 21:43:33.759224 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 14 21:43:33.759230 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 14 21:43:33.759236 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:43:33.759242 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 14 21:43:33.759248 kernel: psci: probing for conduit method from ACPI. Jul 14 21:43:33.759253 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 14 21:43:33.759259 kernel: psci: Using standard PSCI v0.2 function IDs Jul 14 21:43:33.759265 kernel: psci: Trusted OS migration not required Jul 14 21:43:33.759273 kernel: psci: SMC Calling Convention v1.1 Jul 14 21:43:33.759279 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 14 21:43:33.759286 kernel: ACPI: SRAT not present Jul 14 21:43:33.759292 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 14 21:43:33.759298 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 14 21:43:33.759305 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 14 21:43:33.759316 kernel: Detected PIPT I-cache on CPU0 Jul 14 21:43:33.759323 kernel: CPU features: detected: GIC system register CPU interface Jul 14 21:43:33.759330 kernel: CPU features: detected: Hardware dirty bit management Jul 14 21:43:33.759338 kernel: CPU features: detected: Spectre-v4 Jul 14 21:43:33.759345 kernel: CPU features: detected: Spectre-BHB Jul 14 21:43:33.759352 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 14 21:43:33.759358 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 14 21:43:33.759365 kernel: CPU features: detected: ARM erratum 1418040 Jul 14 21:43:33.759371 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 14 21:43:33.759377 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 14 21:43:33.759383 kernel: Policy zone: DMA Jul 14 21:43:33.759390 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236 Jul 14 21:43:33.759397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 21:43:33.759403 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 21:43:33.759409 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 21:43:33.759415 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 21:43:33.759423 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Jul 14 21:43:33.759429 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 21:43:33.759435 kernel: trace event string verifier disabled Jul 14 21:43:33.759441 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 21:43:33.759448 kernel: rcu: RCU event tracing is enabled. Jul 14 21:43:33.759455 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 21:43:33.759461 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 21:43:33.759468 kernel: Tracing variant of Tasks RCU enabled. Jul 14 21:43:33.759474 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 14 21:43:33.759480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 21:43:33.759487 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 14 21:43:33.759494 kernel: GICv3: 256 SPIs implemented Jul 14 21:43:33.759500 kernel: GICv3: 0 Extended SPIs implemented Jul 14 21:43:33.759507 kernel: GICv3: Distributor has no Range Selector support Jul 14 21:43:33.759513 kernel: Root IRQ handler: gic_handle_irq Jul 14 21:43:33.759519 kernel: GICv3: 16 PPIs implemented Jul 14 21:43:33.759525 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 14 21:43:33.759531 kernel: ACPI: SRAT not present Jul 14 21:43:33.759537 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 14 21:43:33.759544 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 14 21:43:33.759550 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 14 21:43:33.759556 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 14 21:43:33.759562 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 14 21:43:33.759570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:43:33.759576 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 14 21:43:33.759582 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 14 21:43:33.759589 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 14 21:43:33.759595 kernel: arm-pv: using stolen time PV Jul 14 21:43:33.759601 kernel: Console: colour dummy device 80x25 Jul 14 21:43:33.759607 kernel: ACPI: Core revision 20210730 Jul 14 21:43:33.759614 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 14 21:43:33.759621 kernel: pid_max: default: 32768 minimum: 301 Jul 14 21:43:33.759627 kernel: LSM: Security Framework initializing Jul 14 21:43:33.759634 kernel: SELinux: Initializing. Jul 14 21:43:33.759640 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 21:43:33.759646 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 21:43:33.759653 kernel: rcu: Hierarchical SRCU implementation. Jul 14 21:43:33.759659 kernel: Platform MSI: ITS@0x8080000 domain created Jul 14 21:43:33.759665 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 14 21:43:33.759671 kernel: Remapping and enabling EFI services. Jul 14 21:43:33.759677 kernel: smp: Bringing up secondary CPUs ... 
Jul 14 21:43:33.759683 kernel: Detected PIPT I-cache on CPU1 Jul 14 21:43:33.759691 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 14 21:43:33.759697 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 14 21:43:33.759704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:43:33.759710 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 14 21:43:33.759716 kernel: Detected PIPT I-cache on CPU2 Jul 14 21:43:33.759722 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 14 21:43:33.759729 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 14 21:43:33.759735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:43:33.759741 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 14 21:43:33.759761 kernel: Detected PIPT I-cache on CPU3 Jul 14 21:43:33.759771 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 14 21:43:33.759794 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 14 21:43:33.759801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:43:33.759807 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 14 21:43:33.759818 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 21:43:33.759826 kernel: SMP: Total of 4 processors activated. Jul 14 21:43:33.759833 kernel: CPU features: detected: 32-bit EL0 Support Jul 14 21:43:33.759840 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 14 21:43:33.759846 kernel: CPU features: detected: Common not Private translations Jul 14 21:43:33.759853 kernel: CPU features: detected: CRC32 instructions Jul 14 21:43:33.759859 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 14 21:43:33.759866 kernel: CPU features: detected: LSE atomic instructions Jul 14 21:43:33.759873 kernel: CPU features: detected: Privileged Access Never Jul 14 21:43:33.759880 kernel: CPU features: detected: RAS Extension Support Jul 14 21:43:33.759887 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 14 21:43:33.759894 kernel: CPU: All CPU(s) started at EL1 Jul 14 21:43:33.759900 kernel: alternatives: patching kernel code Jul 14 21:43:33.759908 kernel: devtmpfs: initialized Jul 14 21:43:33.759914 kernel: KASLR enabled Jul 14 21:43:33.759921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 21:43:33.759928 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 21:43:33.759934 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 21:43:33.759941 kernel: SMBIOS 3.0.0 present. 
Jul 14 21:43:33.759947 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 14 21:43:33.759954 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 21:43:33.759960 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 14 21:43:33.759968 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 14 21:43:33.759975 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 14 21:43:33.759981 kernel: audit: initializing netlink subsys (disabled) Jul 14 21:43:33.759988 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Jul 14 21:43:33.759995 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 21:43:33.760001 kernel: cpuidle: using governor menu Jul 14 21:43:33.760008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 14 21:43:33.760014 kernel: ASID allocator initialised with 32768 entries Jul 14 21:43:33.760021 kernel: ACPI: bus type PCI registered Jul 14 21:43:33.760029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 21:43:33.760035 kernel: Serial: AMBA PL011 UART driver Jul 14 21:43:33.760042 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 21:43:33.760049 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 14 21:43:33.760055 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 21:43:33.760062 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 14 21:43:33.760069 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 21:43:33.760076 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 14 21:43:33.760083 kernel: ACPI: Added _OSI(Module Device) Jul 14 21:43:33.760091 kernel: ACPI: Added _OSI(Processor Device) Jul 14 21:43:33.760097 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 21:43:33.760104 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 14 21:43:33.760110 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 14 21:43:33.760117 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 14 21:43:33.760123 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 21:43:33.760130 kernel: ACPI: Interpreter enabled Jul 14 21:43:33.760137 kernel: ACPI: Using GIC for interrupt routing Jul 14 21:43:33.760143 kernel: ACPI: MCFG table detected, 1 entries Jul 14 21:43:33.760151 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 14 21:43:33.760157 kernel: printk: console [ttyAMA0] enabled Jul 14 21:43:33.760164 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 21:43:33.760285 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 21:43:33.760348 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 14 21:43:33.760406 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 14 21:43:33.760469 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 14 21:43:33.760530 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 14 21:43:33.760539 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 14 21:43:33.760546 kernel: PCI host bridge to bus 0000:00 Jul 14 21:43:33.760625 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 14 21:43:33.760678 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 14 
21:43:33.760729 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 14 21:43:33.760800 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 21:43:33.760881 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 14 21:43:33.760955 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 21:43:33.761017 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 14 21:43:33.761081 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 14 21:43:33.761139 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 21:43:33.761209 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 21:43:33.761267 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 14 21:43:33.761334 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 14 21:43:33.761663 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 14 21:43:33.761718 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 14 21:43:33.761802 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 14 21:43:33.761814 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 14 21:43:33.761822 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 14 21:43:33.761828 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 14 21:43:33.761839 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 14 21:43:33.761846 kernel: iommu: Default domain type: Translated Jul 14 21:43:33.761853 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 14 21:43:33.761859 kernel: vgaarb: loaded Jul 14 21:43:33.761866 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 14 21:43:33.761873 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 14 21:43:33.761880 kernel: PTP clock support registered Jul 14 21:43:33.761886 kernel: Registered efivars operations Jul 14 21:43:33.761893 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 14 21:43:33.761899 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 21:43:33.761908 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 21:43:33.761915 kernel: pnp: PnP ACPI init Jul 14 21:43:33.761987 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 14 21:43:33.761997 kernel: pnp: PnP ACPI: found 1 devices Jul 14 21:43:33.762004 kernel: NET: Registered PF_INET protocol family Jul 14 21:43:33.762011 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 21:43:33.762017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 21:43:33.762024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 21:43:33.762032 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 21:43:33.762039 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 14 21:43:33.762046 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 21:43:33.762052 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 21:43:33.762059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 21:43:33.762066 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 21:43:33.762072 kernel: PCI: CLS 0 bytes, default 64 Jul 14 21:43:33.762131 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 14 21:43:33.762138 kernel: kvm [1]: HYP mode not available Jul 14 21:43:33.762147 kernel: Initialise system trusted keyrings Jul 14 21:43:33.762154 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 21:43:33.762160 kernel: Key type asymmetric registered Jul 14 21:43:33.762167 kernel: Asymmetric key parser 'x509' registered Jul 14 21:43:33.762173 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 14 21:43:33.762180 kernel: io scheduler mq-deadline registered Jul 14 21:43:33.762186 kernel: io scheduler kyber registered Jul 14 21:43:33.762193 kernel: io scheduler bfq registered Jul 14 21:43:33.762199 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 14 21:43:33.762207 kernel: ACPI: button: Power Button [PWRB] Jul 14 21:43:33.762214 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 14 21:43:33.762293 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 14 21:43:33.762303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 21:43:33.762310 kernel: thunder_xcv, ver 1.0 Jul 14 21:43:33.762316 kernel: thunder_bgx, ver 1.0 Jul 14 21:43:33.762322 kernel: nicpf, ver 1.0 Jul 14 21:43:33.762329 kernel: nicvf, ver 1.0 Jul 14 21:43:33.762397 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 14 21:43:33.762455 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:43:33 UTC (1752529413) Jul 14 21:43:33.762464 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 14 21:43:33.762471 kernel: NET: Registered PF_INET6 protocol family Jul 14 21:43:33.762477 kernel: Segment Routing with IPv6 Jul 14 21:43:33.762484 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 21:43:33.762491 kernel: NET: Registered PF_PACKET protocol family Jul 14 21:43:33.762497 kernel: Key type 
dns_resolver registered Jul 14 21:43:33.762504 kernel: registered taskstats version 1 Jul 14 21:43:33.762512 kernel: Loading compiled-in X.509 certificates Jul 14 21:43:33.762519 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 118351bb2b1409a8fe1c98db16ecff1bb5342a27' Jul 14 21:43:33.762526 kernel: Key type .fscrypt registered Jul 14 21:43:33.762532 kernel: Key type fscrypt-provisioning registered Jul 14 21:43:33.762539 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 21:43:33.762545 kernel: ima: Allocated hash algorithm: sha1 Jul 14 21:43:33.762552 kernel: ima: No architecture policies found Jul 14 21:43:33.762559 kernel: clk: Disabling unused clocks Jul 14 21:43:33.762565 kernel: Freeing unused kernel memory: 36416K Jul 14 21:43:33.762573 kernel: Run /init as init process Jul 14 21:43:33.762579 kernel: with arguments: Jul 14 21:43:33.762585 kernel: /init Jul 14 21:43:33.762591 kernel: with environment: Jul 14 21:43:33.762598 kernel: HOME=/ Jul 14 21:43:33.762604 kernel: TERM=linux Jul 14 21:43:33.762611 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 21:43:33.762619 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 21:43:33.762629 systemd[1]: Detected virtualization kvm. Jul 14 21:43:33.762637 systemd[1]: Detected architecture arm64. Jul 14 21:43:33.762643 systemd[1]: Running in initrd. Jul 14 21:43:33.762650 systemd[1]: No hostname configured, using default hostname. Jul 14 21:43:33.762657 systemd[1]: Hostname set to . Jul 14 21:43:33.762665 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:43:33.762672 systemd[1]: Queued start job for default target initrd.target. Jul 14 21:43:33.762679 systemd[1]: Started systemd-ask-password-console.path. Jul 14 21:43:33.762687 systemd[1]: Reached target cryptsetup.target. Jul 14 21:43:33.762694 systemd[1]: Reached target paths.target. Jul 14 21:43:33.762701 systemd[1]: Reached target slices.target. Jul 14 21:43:33.762708 systemd[1]: Reached target swap.target. Jul 14 21:43:33.762715 systemd[1]: Reached target timers.target. Jul 14 21:43:33.762722 systemd[1]: Listening on iscsid.socket. Jul 14 21:43:33.762730 systemd[1]: Listening on iscsiuio.socket. Jul 14 21:43:33.762738 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 21:43:33.762769 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 21:43:33.762779 systemd[1]: Listening on systemd-journald.socket. Jul 14 21:43:33.762786 systemd[1]: Listening on systemd-networkd.socket. Jul 14 21:43:33.762793 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 21:43:33.762804 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 21:43:33.762811 systemd[1]: Reached target sockets.target. Jul 14 21:43:33.762819 systemd[1]: Starting kmod-static-nodes.service... Jul 14 21:43:33.762826 systemd[1]: Finished network-cleanup.service. Jul 14 21:43:33.762836 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 21:43:33.762843 systemd[1]: Starting systemd-journald.service... Jul 14 21:43:33.762852 systemd[1]: Starting systemd-modules-load.service... Jul 14 21:43:33.762859 systemd[1]: Starting systemd-resolved.service... Jul 14 21:43:33.762866 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 14 21:43:33.762874 systemd[1]: Finished kmod-static-nodes.service. Jul 14 21:43:33.762880 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 21:43:33.762888 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 21:43:33.762897 systemd[1]: Finished systemd-vconsole-setup.service. Jul 14 21:43:33.762906 kernel: audit: type=1130 audit(1752529413.758:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.762913 systemd[1]: Starting dracut-cmdline-ask.service... Jul 14 21:43:33.762923 systemd-journald[290]: Journal started Jul 14 21:43:33.762971 systemd-journald[290]: Runtime Journal (/run/log/journal/b894782f72e645b19fe5a53cbc6fe091) is 6.0M, max 48.7M, 42.6M free. Jul 14 21:43:33.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.754920 systemd-modules-load[291]: Inserted module 'overlay' Jul 14 21:43:33.764930 systemd[1]: Started systemd-journald.service. Jul 14 21:43:33.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.765343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 21:43:33.771036 kernel: audit: type=1130 audit(1752529413.764:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.771056 kernel: audit: type=1130 audit(1752529413.766:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.778891 systemd[1]: Finished dracut-cmdline-ask.service. Jul 14 21:43:33.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.780329 systemd[1]: Starting dracut-cmdline.service... Jul 14 21:43:33.784314 kernel: audit: type=1130 audit(1752529413.778:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.784331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 21:43:33.781511 systemd-resolved[292]: Positive Trust Anchors: Jul 14 21:43:33.781518 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:43:33.781545 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 21:43:33.785675 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 14 21:43:33.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.788257 systemd[1]: Started systemd-resolved.service. Jul 14 21:43:33.793696 kernel: audit: type=1130 audit(1752529413.789:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.790527 systemd[1]: Reached target nss-lookup.target. Jul 14 21:43:33.795777 kernel: Bridge firewalling registered Jul 14 21:43:33.795814 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 14 21:43:33.799721 dracut-cmdline[308]: dracut-dracut-053 Jul 14 21:43:33.801889 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236 Jul 14 21:43:33.806771 kernel: SCSI subsystem initialized Jul 14 21:43:33.814514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 21:43:33.814546 kernel: device-mapper: uevent: version 1.0.3 Jul 14 21:43:33.814555 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 14 21:43:33.818133 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 14 21:43:33.818903 systemd[1]: Finished systemd-modules-load.service. Jul 14 21:43:33.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.821776 kernel: audit: type=1130 audit(1752529413.818:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.820234 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:43:33.828351 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:43:33.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.831795 kernel: audit: type=1130 audit(1752529413.828:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:33.861776 kernel: Loading iSCSI transport class v2.0-870. Jul 14 21:43:33.874786 kernel: iscsi: registered transport (tcp) Jul 14 21:43:33.888774 kernel: iscsi: registered transport (qla4xxx) Jul 14 21:43:33.888790 kernel: QLogic iSCSI HBA Driver Jul 14 21:43:33.922169 systemd[1]: Finished dracut-cmdline.service. Jul 14 21:43:33.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.923833 systemd[1]: Starting dracut-pre-udev.service... Jul 14 21:43:33.926188 kernel: audit: type=1130 audit(1752529413.922:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:33.967777 kernel: raid6: neonx8 gen() 13746 MB/s Jul 14 21:43:33.984769 kernel: raid6: neonx8 xor() 10733 MB/s Jul 14 21:43:34.001781 kernel: raid6: neonx4 gen() 13523 MB/s Jul 14 21:43:34.018768 kernel: raid6: neonx4 xor() 11139 MB/s Jul 14 21:43:34.035774 kernel: raid6: neonx2 gen() 13050 MB/s Jul 14 21:43:34.052773 kernel: raid6: neonx2 xor() 10287 MB/s Jul 14 21:43:34.069772 kernel: raid6: neonx1 gen() 10609 MB/s Jul 14 21:43:34.086773 kernel: raid6: neonx1 xor() 8728 MB/s Jul 14 21:43:34.103769 kernel: raid6: int64x8 gen() 6268 MB/s Jul 14 21:43:34.120770 kernel: raid6: int64x8 xor() 3542 MB/s Jul 14 21:43:34.137772 kernel: raid6: int64x4 gen() 7223 MB/s Jul 14 21:43:34.154770 kernel: raid6: int64x4 xor() 3854 MB/s Jul 14 21:43:34.171769 kernel: raid6: int64x2 gen() 6146 MB/s Jul 14 21:43:34.189389 kernel: raid6: int64x2 xor() 3310 MB/s Jul 14 21:43:34.205774 kernel: raid6: int64x1 gen() 5036 MB/s Jul 14 21:43:34.223110 kernel: raid6: int64x1 xor() 2644 MB/s Jul 14 21:43:34.223129 kernel: raid6: using algorithm neonx8 gen() 13746 MB/s Jul 14 21:43:34.223138 kernel: raid6: .... xor() 10733 MB/s, rmw enabled Jul 14 21:43:34.223147 kernel: raid6: using neon recovery algorithm Jul 14 21:43:34.234986 kernel: xor: measuring software checksum speed Jul 14 21:43:34.235005 kernel: 8regs : 17202 MB/sec Jul 14 21:43:34.235013 kernel: 32regs : 20733 MB/sec Jul 14 21:43:34.235895 kernel: arm64_neon : 27738 MB/sec Jul 14 21:43:34.235906 kernel: xor: using function: arm64_neon (27738 MB/sec) Jul 14 21:43:34.289782 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 14 21:43:34.300688 systemd[1]: Finished dracut-pre-udev.service. Jul 14 21:43:34.303839 kernel: audit: type=1130 audit(1752529414.300:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:34.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:34.302000 audit: BPF prog-id=7 op=LOAD Jul 14 21:43:34.303000 audit: BPF prog-id=8 op=LOAD Jul 14 21:43:34.304238 systemd[1]: Starting systemd-udevd.service... Jul 14 21:43:34.321032 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 14 21:43:34.324473 systemd[1]: Started systemd-udevd.service. Jul 14 21:43:34.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:34.326610 systemd[1]: Starting dracut-pre-trigger.service... Jul 14 21:43:34.338742 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 14 21:43:34.369468 systemd[1]: Finished dracut-pre-trigger.service. Jul 14 21:43:34.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:34.371013 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 21:43:34.404489 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 21:43:34.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:34.435816 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 21:43:34.440657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 21:43:34.440673 kernel: GPT:9289727 != 19775487 Jul 14 21:43:34.440682 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 21:43:34.440691 kernel: GPT:9289727 != 19775487 Jul 14 21:43:34.440699 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 21:43:34.440707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:34.462538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 21:43:34.464018 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (538) Jul 14 21:43:34.466808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 21:43:34.469671 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 21:43:34.470511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 14 21:43:34.474909 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 21:43:34.476493 systemd[1]: Starting disk-uuid.service... Jul 14 21:43:34.485834 disk-uuid[562]: Primary Header is updated. Jul 14 21:43:34.485834 disk-uuid[562]: Secondary Entries is updated. Jul 14 21:43:34.485834 disk-uuid[562]: Secondary Header is updated. Jul 14 21:43:34.488775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:34.502781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:35.501520 disk-uuid[563]: The operation has completed successfully. Jul 14 21:43:35.502391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 21:43:35.524254 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 21:43:35.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.524362 systemd[1]: Finished disk-uuid.service. Jul 14 21:43:35.528518 systemd[1]: Starting verity-setup.service... Jul 14 21:43:35.548782 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 21:43:35.579691 systemd[1]: Found device dev-mapper-usr.device. Jul 14 21:43:35.581830 systemd[1]: Mounting sysusr-usr.mount... Jul 14 21:43:35.583624 systemd[1]: Finished verity-setup.service. 
Jul 14 21:43:35.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.630080 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 21:43:35.630044 systemd[1]: Mounted sysusr-usr.mount. Jul 14 21:43:35.630674 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 21:43:35.631495 systemd[1]: Starting ignition-setup.service... Jul 14 21:43:35.633146 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 21:43:35.641162 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:43:35.641219 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:43:35.641230 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:43:35.650629 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 21:43:35.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.657202 systemd[1]: Finished ignition-setup.service. Jul 14 21:43:35.658678 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 21:43:35.730198 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 21:43:35.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.731000 audit: BPF prog-id=9 op=LOAD Jul 14 21:43:35.732992 systemd[1]: Starting systemd-networkd.service... Jul 14 21:43:35.757063 ignition[640]: Ignition 2.14.0 Jul 14 21:43:35.757073 ignition[640]: Stage: fetch-offline Jul 14 21:43:35.757115 ignition[640]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:35.758255 systemd-networkd[738]: lo: Link UP Jul 14 21:43:35.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.757124 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:35.758259 systemd-networkd[738]: lo: Gained carrier Jul 14 21:43:35.757290 ignition[640]: parsed url from cmdline: "" Jul 14 21:43:35.758847 systemd-networkd[738]: Enumeration completed Jul 14 21:43:35.757293 ignition[640]: no config URL provided Jul 14 21:43:35.758993 systemd[1]: Started systemd-networkd.service. Jul 14 21:43:35.757298 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 21:43:35.759106 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:43:35.757305 ignition[640]: no config at "/usr/lib/ignition/user.ign" Jul 14 21:43:35.760218 systemd[1]: Reached target network.target. Jul 14 21:43:35.757322 ignition[640]: op(1): [started] loading QEMU firmware config module Jul 14 21:43:35.760389 systemd-networkd[738]: eth0: Link UP Jul 14 21:43:35.757328 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 21:43:35.760393 systemd-networkd[738]: eth0: Gained carrier Jul 14 21:43:35.764151 systemd[1]: Starting iscsiuio.service... Jul 14 21:43:35.771107 ignition[640]: op(1): [finished] loading QEMU firmware config module Jul 14 21:43:35.779248 systemd[1]: Started iscsiuio.service. 
Jul 14 21:43:35.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.780851 systemd[1]: Starting iscsid.service... Jul 14 21:43:35.782838 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:43:35.784619 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:43:35.784619 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 14 21:43:35.784619 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 21:43:35.784619 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 21:43:35.784619 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 21:43:35.784619 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 21:43:35.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.787585 systemd[1]: Started iscsid.service. Jul 14 21:43:35.791835 systemd[1]: Starting dracut-initqueue.service... Jul 14 21:43:35.803231 systemd[1]: Finished dracut-initqueue.service. Jul 14 21:43:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.804130 systemd[1]: Reached target remote-fs-pre.target. Jul 14 21:43:35.805361 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:43:35.806636 systemd[1]: Reached target remote-fs.target. Jul 14 21:43:35.808774 systemd[1]: Starting dracut-pre-mount.service... Jul 14 21:43:35.810827 ignition[640]: parsing config with SHA512: b8c01f22b207e9d0cd126fc8572a8a4b04b3ddfa57b3e0df9892c244eb0e61c80b9064272a6059933cf27e84d091dfbec9d7dfae608a7c82e32a5cac01a275eb Jul 14 21:43:35.822775 systemd[1]: Finished dracut-pre-mount.service. Jul 14 21:43:35.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.826150 unknown[640]: fetched base config from "system" Jul 14 21:43:35.826162 unknown[640]: fetched user config from "qemu" Jul 14 21:43:35.826583 ignition[640]: fetch-offline: fetch-offline passed Jul 14 21:43:35.826636 ignition[640]: Ignition finished successfully Jul 14 21:43:35.829613 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 21:43:35.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.830494 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jul 14 21:43:35.831313 systemd[1]: Starting ignition-kargs.service... Jul 14 21:43:35.840534 ignition[760]: Ignition 2.14.0 Jul 14 21:43:35.840545 ignition[760]: Stage: kargs Jul 14 21:43:35.840649 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:35.840660 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:35.841491 ignition[760]: kargs: kargs passed Jul 14 21:43:35.843404 systemd[1]: Finished ignition-kargs.service. Jul 14 21:43:35.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.841535 ignition[760]: Ignition finished successfully Jul 14 21:43:35.845195 systemd[1]: Starting ignition-disks.service... Jul 14 21:43:35.852096 ignition[766]: Ignition 2.14.0 Jul 14 21:43:35.852106 ignition[766]: Stage: disks Jul 14 21:43:35.852206 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:35.853928 systemd[1]: Finished ignition-disks.service. Jul 14 21:43:35.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.852216 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:35.855119 systemd[1]: Reached target initrd-root-device.target. Jul 14 21:43:35.853052 ignition[766]: disks: disks passed Jul 14 21:43:35.856175 systemd[1]: Reached target local-fs-pre.target. Jul 14 21:43:35.853099 ignition[766]: Ignition finished successfully Jul 14 21:43:35.857367 systemd[1]: Reached target local-fs.target. Jul 14 21:43:35.858335 systemd[1]: Reached target sysinit.target. Jul 14 21:43:35.859235 systemd[1]: Reached target basic.target. Jul 14 21:43:35.861164 systemd[1]: Starting systemd-fsck-root.service... Jul 14 21:43:35.875645 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 14 21:43:35.960449 systemd[1]: Finished systemd-fsck-root.service. Jul 14 21:43:35.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:35.963311 systemd[1]: Mounting sysroot.mount... Jul 14 21:43:35.976672 systemd[1]: Mounted sysroot.mount. Jul 14 21:43:35.977729 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 21:43:35.977354 systemd[1]: Reached target initrd-root-fs.target. Jul 14 21:43:35.980659 systemd[1]: Mounting sysroot-usr.mount... Jul 14 21:43:35.981450 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 21:43:35.981488 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 21:43:35.981509 systemd[1]: Reached target ignition-diskful.target. Jul 14 21:43:35.985469 systemd[1]: Mounted sysroot-usr.mount. Jul 14 21:43:35.986730 systemd[1]: Starting initrd-setup-root.service... 
Jul 14 21:43:35.992968 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 21:43:35.997517 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 14 21:43:36.001880 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 21:43:36.005790 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 21:43:36.032511 systemd[1]: Finished initrd-setup-root.service. Jul 14 21:43:36.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:36.034029 systemd[1]: Starting ignition-mount.service... Jul 14 21:43:36.035708 systemd[1]: Starting sysroot-boot.service... Jul 14 21:43:36.040340 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Jul 14 21:43:36.050877 ignition[827]: INFO : Ignition 2.14.0 Jul 14 21:43:36.051698 ignition[827]: INFO : Stage: mount Jul 14 21:43:36.052450 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:36.053345 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:36.055240 ignition[827]: INFO : mount: mount passed Jul 14 21:43:36.055932 ignition[827]: INFO : Ignition finished successfully Jul 14 21:43:36.057956 systemd[1]: Finished ignition-mount.service. Jul 14 21:43:36.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:36.059044 systemd[1]: Finished sysroot-boot.service. Jul 14 21:43:36.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:36.591125 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 21:43:36.597787 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836) Jul 14 21:43:36.597835 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 21:43:36.598953 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:43:36.598972 kernel: BTRFS info (device vda6): has skinny extents Jul 14 21:43:36.602405 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 21:43:36.603978 systemd[1]: Starting ignition-files.service... 
Jul 14 21:43:36.618721 ignition[856]: INFO : Ignition 2.14.0 Jul 14 21:43:36.619677 ignition[856]: INFO : Stage: files Jul 14 21:43:36.620434 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:36.621190 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:36.622815 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Jul 14 21:43:36.627448 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 21:43:36.627448 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 21:43:36.632385 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 21:43:36.633963 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 21:43:36.633963 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 21:43:36.633963 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 14 21:43:36.633963 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 14 21:43:36.633313 unknown[856]: wrote ssh authorized keys file for user: core Jul 14 21:43:36.687448 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 21:43:37.520684 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 14 21:43:37.520684 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 14 21:43:37.524419 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 14 21:43:37.617013 systemd-networkd[738]: eth0: Gained IPv6LL Jul 14 21:43:38.015695 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 21:43:38.401559 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 14 21:43:38.401559 ignition[856]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 21:43:38.405854 ignition[856]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:43:38.435128 ignition[856]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:43:38.436248 ignition[856]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 21:43:38.436248 ignition[856]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:43:38.436248 ignition[856]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:43:38.436248 ignition[856]: INFO : files: files passed Jul 14 21:43:38.436248 ignition[856]: INFO : Ignition finished successfully Jul 14 21:43:38.444697 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 14 21:43:38.444718 kernel: audit: type=1130 audit(1752529418.437:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:38.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.436558 systemd[1]: Finished ignition-files.service. Jul 14 21:43:38.438935 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 21:43:38.446371 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 21:43:38.442133 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 14 21:43:38.448864 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:43:38.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.442914 systemd[1]: Starting ignition-quench.service... Jul 14 21:43:38.456524 kernel: audit: type=1130 audit(1752529418.448:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.456544 kernel: audit: type=1130 audit(1752529418.452:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.456554 kernel: audit: type=1131 audit(1752529418.452:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.447988 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 14 21:43:38.449704 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 21:43:38.449836 systemd[1]: Finished ignition-quench.service. Jul 14 21:43:38.452899 systemd[1]: Reached target ignition-complete.target. Jul 14 21:43:38.457847 systemd[1]: Starting initrd-parse-etc.service... Jul 14 21:43:38.470601 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 21:43:38.475536 kernel: audit: type=1130 audit(1752529418.470:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.475559 kernel: audit: type=1131 audit(1752529418.470:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:38.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.470716 systemd[1]: Finished initrd-parse-etc.service. Jul 14 21:43:38.471500 systemd[1]: Reached target initrd-fs.target. Jul 14 21:43:38.476078 systemd[1]: Reached target initrd.target. Jul 14 21:43:38.477077 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 14 21:43:38.477815 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 21:43:38.488209 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 21:43:38.493559 kernel: audit: type=1130 audit(1752529418.488:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.489586 systemd[1]: Starting initrd-cleanup.service... Jul 14 21:43:38.498505 systemd[1]: Stopped target nss-lookup.target. Jul 14 21:43:38.499184 systemd[1]: Stopped target remote-cryptsetup.target. Jul 14 21:43:38.500344 systemd[1]: Stopped target timers.target. Jul 14 21:43:38.501320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 21:43:38.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.501427 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 21:43:38.505517 kernel: audit: type=1131 audit(1752529418.502:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.502460 systemd[1]: Stopped target initrd.target. Jul 14 21:43:38.505174 systemd[1]: Stopped target basic.target. Jul 14 21:43:38.506090 systemd[1]: Stopped target ignition-complete.target. Jul 14 21:43:38.507077 systemd[1]: Stopped target ignition-diskful.target. Jul 14 21:43:38.508115 systemd[1]: Stopped target initrd-root-device.target. Jul 14 21:43:38.509208 systemd[1]: Stopped target remote-fs.target. Jul 14 21:43:38.510207 systemd[1]: Stopped target remote-fs-pre.target. Jul 14 21:43:38.511283 systemd[1]: Stopped target sysinit.target. Jul 14 21:43:38.512223 systemd[1]: Stopped target local-fs.target. Jul 14 21:43:38.513192 systemd[1]: Stopped target local-fs-pre.target. Jul 14 21:43:38.514175 systemd[1]: Stopped target swap.target. Jul 14 21:43:38.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.515066 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 21:43:38.519213 kernel: audit: type=1131 audit(1752529418.515:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.515179 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 14 21:43:38.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.516184 systemd[1]: Stopped target cryptsetup.target. Jul 14 21:43:38.522869 kernel: audit: type=1131 audit(1752529418.519:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.518656 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:43:38.518785 systemd[1]: Stopped dracut-initqueue.service. Jul 14 21:43:38.519865 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:43:38.519963 systemd[1]: Stopped ignition-fetch-offline.service. Jul 14 21:43:38.522546 systemd[1]: Stopped target paths.target. Jul 14 21:43:38.523385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:43:38.526801 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 21:43:38.527656 systemd[1]: Stopped target slices.target. Jul 14 21:43:38.528638 systemd[1]: Stopped target sockets.target. Jul 14 21:43:38.529558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:43:38.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.529670 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 21:43:38.531532 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:43:38.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.532002 systemd[1]: Stopped ignition-files.service. Jul 14 21:43:38.535118 systemd[1]: Stopping ignition-mount.service... Jul 14 21:43:38.536278 iscsid[746]: iscsid shutting down. Jul 14 21:43:38.537968 systemd[1]: Stopping iscsid.service... Jul 14 21:43:38.539212 systemd[1]: Stopping sysroot-boot.service... Jul 14 21:43:38.539721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:43:38.539871 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 21:43:38.540888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:43:38.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:38.543707 ignition[896]: INFO : Ignition 2.14.0 Jul 14 21:43:38.543707 ignition[896]: INFO : Stage: umount Jul 14 21:43:38.543707 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:43:38.543707 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:43:38.543707 ignition[896]: INFO : umount: umount passed Jul 14 21:43:38.543707 ignition[896]: INFO : Ignition finished successfully Jul 14 21:43:38.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.540982 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 21:43:38.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.543365 systemd[1]: iscsid.service: Deactivated successfully. Jul 14 21:43:38.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.543479 systemd[1]: Stopped iscsid.service. Jul 14 21:43:38.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.544708 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:43:38.544822 systemd[1]: Stopped ignition-mount.service. Jul 14 21:43:38.546029 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:43:38.546116 systemd[1]: Finished initrd-cleanup.service. Jul 14 21:43:38.547813 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:43:38.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.547849 systemd[1]: Closed iscsid.socket. Jul 14 21:43:38.549623 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:43:38.549725 systemd[1]: Stopped ignition-disks.service. Jul 14 21:43:38.550350 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:43:38.550433 systemd[1]: Stopped ignition-kargs.service. Jul 14 21:43:38.551445 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:43:38.551982 systemd[1]: Stopped ignition-setup.service. Jul 14 21:43:38.552660 systemd[1]: Stopping iscsiuio.service... Jul 14 21:43:38.557604 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:43:38.557999 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 21:43:38.558084 systemd[1]: Stopped iscsiuio.service. 
Jul 14 21:43:38.558890 systemd[1]: Stopped target network.target. Jul 14 21:43:38.559802 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:43:38.559830 systemd[1]: Closed iscsiuio.socket. Jul 14 21:43:38.560942 systemd[1]: Stopping systemd-networkd.service... Jul 14 21:43:38.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.561877 systemd[1]: Stopping systemd-resolved.service... Jul 14 21:43:38.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.566885 systemd-networkd[738]: eth0: DHCPv6 lease lost Jul 14 21:43:38.568947 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:43:38.569055 systemd[1]: Stopped systemd-networkd.service. Jul 14 21:43:38.575000 audit: BPF prog-id=9 op=UNLOAD Jul 14 21:43:38.577000 audit: BPF prog-id=6 op=UNLOAD Jul 14 21:43:38.573107 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:43:38.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.573202 systemd[1]: Stopped systemd-resolved.service. Jul 14 21:43:38.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.574814 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:43:38.574846 systemd[1]: Closed systemd-networkd.socket. Jul 14 21:43:38.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.576277 systemd[1]: Stopping network-cleanup.service... Jul 14 21:43:38.577249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:43:38.577309 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 14 21:43:38.579107 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:43:38.579157 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:43:38.580825 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:43:38.580877 systemd[1]: Stopped systemd-modules-load.service. Jul 14 21:43:38.585728 systemd[1]: Stopping systemd-udevd.service... Jul 14 21:43:38.591749 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:43:38.593862 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:43:38.593991 systemd[1]: Stopped sysroot-boot.service. Jul 14 21:43:38.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.595387 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:43:38.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.595499 systemd[1]: Stopped systemd-udevd.service. 
Jul 14 21:43:38.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.596486 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:43:38.596566 systemd[1]: Stopped network-cleanup.service. Jul 14 21:43:38.597450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:43:38.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.597482 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 21:43:38.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.598333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:43:38.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.598363 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 21:43:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.599413 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:43:38.599455 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 21:43:38.600382 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:43:38.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.600415 systemd[1]: Stopped dracut-cmdline.service. Jul 14 21:43:38.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.601471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:43:38.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.601505 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 21:43:38.602481 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:43:38.602513 systemd[1]: Stopped initrd-setup-root.service. Jul 14 21:43:38.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.604327 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 21:43:38.605409 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:43:38.605464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
Jul 14 21:43:38.607200 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:43:38.607238 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 21:43:38.607996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:43:38.608036 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 21:43:38.609890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 21:43:38.610302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:43:38.610386 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 21:43:38.611684 systemd[1]: Reached target initrd-switch-root.target. Jul 14 21:43:38.613321 systemd[1]: Starting initrd-switch-root.service... Jul 14 21:43:38.620110 systemd[1]: Switching root. Jul 14 21:43:38.638066 systemd-journald[290]: Journal stopped Jul 14 21:43:40.623269 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 14 21:43:40.623328 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 21:43:40.623346 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 21:43:40.623356 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 21:43:40.623366 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:43:40.623376 kernel: SELinux: policy capability open_perms=1 Jul 14 21:43:40.623388 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:43:40.623397 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:43:40.623407 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:43:40.623416 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:43:40.623426 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:43:40.623436 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:43:40.623446 systemd[1]: Successfully loaded SELinux policy in 35.163ms. Jul 14 21:43:40.623467 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.497ms. Jul 14 21:43:40.623478 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 21:43:40.623490 systemd[1]: Detected virtualization kvm. Jul 14 21:43:40.623501 systemd[1]: Detected architecture arm64. Jul 14 21:43:40.623511 systemd[1]: Detected first boot. Jul 14 21:43:40.623525 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:43:40.623536 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 21:43:40.623547 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:43:40.623560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:43:40.623573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:43:40.623584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 21:43:40.623595 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:43:40.623605 systemd[1]: Stopped initrd-switch-root.service. Jul 14 21:43:40.623615 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:43:40.623625 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 21:43:40.623636 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 21:43:40.623647 systemd[1]: Created slice system-getty.slice. Jul 14 21:43:40.623661 systemd[1]: Created slice system-modprobe.slice. Jul 14 21:43:40.623671 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 21:43:40.623682 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 21:43:40.623693 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 21:43:40.623703 systemd[1]: Created slice user.slice. Jul 14 21:43:40.623714 systemd[1]: Started systemd-ask-password-console.path. Jul 14 21:43:40.623731 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 21:43:40.623746 systemd[1]: Set up automount boot.automount. Jul 14 21:43:40.623782 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 21:43:40.623799 systemd[1]: Stopped target initrd-switch-root.target. Jul 14 21:43:40.623810 systemd[1]: Stopped target initrd-fs.target. Jul 14 21:43:40.623823 systemd[1]: Stopped target initrd-root-fs.target. Jul 14 21:43:40.623834 systemd[1]: Reached target integritysetup.target. Jul 14 21:43:40.623844 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:43:40.623854 systemd[1]: Reached target remote-fs.target. Jul 14 21:43:40.623866 systemd[1]: Reached target slices.target. Jul 14 21:43:40.623876 systemd[1]: Reached target swap.target. Jul 14 21:43:40.623887 systemd[1]: Reached target torcx.target. Jul 14 21:43:40.623897 systemd[1]: Reached target veritysetup.target. Jul 14 21:43:40.623907 systemd[1]: Listening on systemd-coredump.socket. Jul 14 21:43:40.623917 systemd[1]: Listening on systemd-initctl.socket. Jul 14 21:43:40.623928 systemd[1]: Listening on systemd-networkd.socket. Jul 14 21:43:40.623939 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 21:43:40.623949 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 21:43:40.623961 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 21:43:40.623974 systemd[1]: Mounting dev-hugepages.mount... Jul 14 21:43:40.623984 systemd[1]: Mounting dev-mqueue.mount... Jul 14 21:43:40.623995 systemd[1]: Mounting media.mount... Jul 14 21:43:40.624005 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 21:43:40.624015 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 21:43:40.624025 systemd[1]: Mounting tmp.mount... Jul 14 21:43:40.624036 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 21:43:40.624046 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:43:40.624058 systemd[1]: Starting kmod-static-nodes.service... Jul 14 21:43:40.624069 systemd[1]: Starting modprobe@configfs.service... Jul 14 21:43:40.624079 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:43:40.624089 systemd[1]: Starting modprobe@drm.service... Jul 14 21:43:40.624099 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:43:40.624109 systemd[1]: Starting modprobe@fuse.service... Jul 14 21:43:40.624119 systemd[1]: Starting modprobe@loop.service... Jul 14 21:43:40.624130 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 14 21:43:40.624141 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:43:40.624152 systemd[1]: Stopped systemd-fsck-root.service. Jul 14 21:43:40.624162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:43:40.624172 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:43:40.624182 kernel: loop: module loaded Jul 14 21:43:40.624192 systemd[1]: Stopped systemd-journald.service. Jul 14 21:43:40.624202 kernel: fuse: init (API version 7.34) Jul 14 21:43:40.624211 systemd[1]: Starting systemd-journald.service... Jul 14 21:43:40.624222 systemd[1]: Starting systemd-modules-load.service... Jul 14 21:43:40.624232 systemd[1]: Starting systemd-network-generator.service... Jul 14 21:43:40.624242 systemd[1]: Starting systemd-remount-fs.service... Jul 14 21:43:40.624253 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 21:43:40.624263 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:43:40.624274 systemd[1]: Stopped verity-setup.service. Jul 14 21:43:40.624284 systemd[1]: Mounted dev-hugepages.mount. Jul 14 21:43:40.624294 systemd[1]: Mounted dev-mqueue.mount. Jul 14 21:43:40.624305 systemd[1]: Mounted media.mount. Jul 14 21:43:40.624319 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 21:43:40.624329 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 14 21:43:40.624339 systemd[1]: Mounted tmp.mount. Jul 14 21:43:40.624351 systemd[1]: Finished kmod-static-nodes.service. Jul 14 21:43:40.624364 systemd-journald[991]: Journal started Jul 14 21:43:40.624404 systemd-journald[991]: Runtime Journal (/run/log/journal/b894782f72e645b19fe5a53cbc6fe091) is 6.0M, max 48.7M, 42.6M free. Jul 14 21:43:40.624434 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:43:38.704000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:43:38.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:43:38.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:43:38.794000 audit: BPF prog-id=10 op=LOAD Jul 14 21:43:38.794000 audit: BPF prog-id=10 op=UNLOAD Jul 14 21:43:38.794000 audit: BPF prog-id=11 op=LOAD Jul 14 21:43:38.794000 audit: BPF prog-id=11 op=UNLOAD Jul 14 21:43:38.845000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 14 21:43:38.845000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400023d8b4 a1=40001bede0 a2=40001c5040 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:43:38.845000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:43:38.847000 audit[929]: AVC avc: denied { associate } for 
pid=929 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 14 21:43:38.847000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400023d989 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:43:38.847000 audit: CWD cwd="/" Jul 14 21:43:38.847000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:43:38.847000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:43:38.847000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 21:43:40.516000 audit: BPF prog-id=12 op=LOAD Jul 14 21:43:40.516000 audit: BPF prog-id=3 op=UNLOAD Jul 14 21:43:40.516000 audit: BPF prog-id=13 op=LOAD Jul 14 21:43:40.516000 audit: BPF prog-id=14 op=LOAD Jul 14 21:43:40.516000 audit: BPF prog-id=4 op=UNLOAD Jul 14 21:43:40.516000 audit: BPF prog-id=5 op=UNLOAD Jul 14 21:43:40.517000 audit: BPF prog-id=15 op=LOAD Jul 14 21:43:40.517000 audit: BPF prog-id=12 op=UNLOAD Jul 14 21:43:40.517000 audit: BPF prog-id=16 op=LOAD Jul 14 21:43:40.517000 audit: BPF prog-id=17 op=LOAD Jul 14 21:43:40.517000 audit: BPF prog-id=13 op=UNLOAD Jul 14 21:43:40.517000 audit: BPF prog-id=14 op=UNLOAD Jul 14 21:43:40.518000 audit: BPF prog-id=18 op=LOAD Jul 14 21:43:40.518000 audit: BPF prog-id=15 op=UNLOAD Jul 14 21:43:40.518000 audit: BPF prog-id=19 op=LOAD Jul 14 21:43:40.518000 audit: BPF prog-id=20 op=LOAD Jul 14 21:43:40.518000 audit: BPF prog-id=16 op=UNLOAD Jul 14 21:43:40.518000 audit: BPF prog-id=17 op=UNLOAD Jul 14 21:43:40.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.528000 audit: BPF prog-id=18 op=UNLOAD Jul 14 21:43:40.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:40.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.599000 audit: BPF prog-id=21 op=LOAD Jul 14 21:43:40.599000 audit: BPF prog-id=22 op=LOAD Jul 14 21:43:40.599000 audit: BPF prog-id=23 op=LOAD Jul 14 21:43:40.599000 audit: BPF prog-id=19 op=UNLOAD Jul 14 21:43:40.599000 audit: BPF prog-id=20 op=UNLOAD Jul 14 21:43:40.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.619000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 21:43:40.619000 audit[991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1f59210 a2=4000 a3=1 items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:43:40.619000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 21:43:40.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.840895 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:43:40.516061 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:43:38.841213 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:43:40.625844 systemd[1]: Finished modprobe@configfs.service. Jul 14 21:43:40.516074 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 21:43:38.841232 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:43:40.519596 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 14 21:43:38.841262 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 14 21:43:38.841272 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 14 21:43:38.841308 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 14 21:43:38.841320 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 14 21:43:38.841514 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 14 21:43:40.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:38.841553 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 21:43:38.841565 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 21:43:38.846039 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 14 21:43:38.846080 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 14 21:43:38.846100 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.101: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.101 Jul 14 21:43:38.846114 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 14 21:43:38.846135 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.101: no such file or directory" path=/var/lib/torcx/store/3510.3.101 Jul 14 21:43:38.846149 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 14 21:43:40.279979 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 
14 21:43:40.280237 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:43:40.280340 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:43:40.280506 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 21:43:40.280555 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 14 21:43:40.280608 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T21:43:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 14 21:43:40.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.627808 systemd[1]: Started systemd-journald.service. Jul 14 21:43:40.628056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:43:40.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:40.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.628894 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:43:40.629716 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:43:40.630053 systemd[1]: Finished modprobe@drm.service. Jul 14 21:43:40.630889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:43:40.631043 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:43:40.631931 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:43:40.632087 systemd[1]: Finished modprobe@fuse.service. Jul 14 21:43:40.632967 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:43:40.633119 systemd[1]: Finished modprobe@loop.service. Jul 14 21:43:40.634113 systemd[1]: Finished systemd-modules-load.service. Jul 14 21:43:40.635925 systemd[1]: Finished systemd-network-generator.service. Jul 14 21:43:40.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.636953 systemd[1]: Finished systemd-remount-fs.service. Jul 14 21:43:40.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.638022 systemd[1]: Reached target network-pre.target. Jul 14 21:43:40.640736 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 21:43:40.642767 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 21:43:40.643320 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:43:40.645934 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 21:43:40.647597 systemd[1]: Starting systemd-journal-flush.service... Jul 14 21:43:40.648316 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:43:40.649577 systemd[1]: Starting systemd-random-seed.service... Jul 14 21:43:40.650285 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:43:40.651327 systemd[1]: Starting systemd-sysctl.service... 
Jul 14 21:43:40.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.653474 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 21:43:40.661830 systemd-journald[991]: Time spent on flushing to /var/log/journal/b894782f72e645b19fe5a53cbc6fe091 is 14.243ms for 1002 entries. Jul 14 21:43:40.661830 systemd-journald[991]: System Journal (/var/log/journal/b894782f72e645b19fe5a53cbc6fe091) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:43:40.687995 systemd-journald[991]: Received client request to flush runtime journal. Jul 14 21:43:40.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.658937 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 21:43:40.659673 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 21:43:40.663926 systemd[1]: Starting systemd-sysusers.service... Jul 14 21:43:40.692893 udevadm[1037]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 21:43:40.664874 systemd[1]: Finished systemd-random-seed.service. Jul 14 21:43:40.665617 systemd[1]: Reached target first-boot-complete.target. Jul 14 21:43:40.673471 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:43:40.678108 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 21:43:40.680043 systemd[1]: Starting systemd-udev-settle.service... Jul 14 21:43:40.689341 systemd[1]: Finished systemd-journal-flush.service. Jul 14 21:43:40.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:40.692614 systemd[1]: Finished systemd-sysusers.service. Jul 14 21:43:40.694586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 21:43:40.712072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 21:43:40.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.023209 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 21:43:41.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:41.023000 audit: BPF prog-id=24 op=LOAD Jul 14 21:43:41.023000 audit: BPF prog-id=25 op=LOAD Jul 14 21:43:41.023000 audit: BPF prog-id=7 op=UNLOAD Jul 14 21:43:41.023000 audit: BPF prog-id=8 op=UNLOAD Jul 14 21:43:41.025186 systemd[1]: Starting systemd-udevd.service... Jul 14 21:43:41.040399 systemd-udevd[1041]: Using default interface naming scheme 'v252'. Jul 14 21:43:41.054892 systemd[1]: Started systemd-udevd.service. Jul 14 21:43:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.056000 audit: BPF prog-id=26 op=LOAD Jul 14 21:43:41.058307 systemd[1]: Starting systemd-networkd.service... Jul 14 21:43:41.066000 audit: BPF prog-id=27 op=LOAD Jul 14 21:43:41.066000 audit: BPF prog-id=28 op=LOAD Jul 14 21:43:41.066000 audit: BPF prog-id=29 op=LOAD Jul 14 21:43:41.068034 systemd[1]: Starting systemd-userdbd.service... Jul 14 21:43:41.080796 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 14 21:43:41.098520 systemd[1]: Started systemd-userdbd.service. Jul 14 21:43:41.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.123741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 21:43:41.168248 systemd-networkd[1050]: lo: Link UP Jul 14 21:43:41.168258 systemd-networkd[1050]: lo: Gained carrier Jul 14 21:43:41.168615 systemd-networkd[1050]: Enumeration completed Jul 14 21:43:41.168719 systemd[1]: Started systemd-networkd.service. Jul 14 21:43:41.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.169543 systemd-networkd[1050]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:43:41.172504 systemd-networkd[1050]: eth0: Link UP Jul 14 21:43:41.172515 systemd-networkd[1050]: eth0: Gained carrier Jul 14 21:43:41.190153 systemd[1]: Finished systemd-udev-settle.service. Jul 14 21:43:41.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.192172 systemd[1]: Starting lvm2-activation-early.service... Jul 14 21:43:41.194967 systemd-networkd[1050]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:43:41.210858 lvm[1074]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:43:41.237657 systemd[1]: Finished lvm2-activation-early.service. Jul 14 21:43:41.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.238500 systemd[1]: Reached target cryptsetup.target. Jul 14 21:43:41.240296 systemd[1]: Starting lvm2-activation.service... Jul 14 21:43:41.244076 lvm[1075]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:43:41.266707 systemd[1]: Finished lvm2-activation.service. 
Jul 14 21:43:41.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.267475 systemd[1]: Reached target local-fs-pre.target. Jul 14 21:43:41.268123 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:43:41.268154 systemd[1]: Reached target local-fs.target. Jul 14 21:43:41.268717 systemd[1]: Reached target machines.target. Jul 14 21:43:41.270570 systemd[1]: Starting ldconfig.service... Jul 14 21:43:41.271583 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.271638 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:43:41.272681 systemd[1]: Starting systemd-boot-update.service... Jul 14 21:43:41.274499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 21:43:41.276389 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 21:43:41.279146 systemd[1]: Starting systemd-sysext.service... Jul 14 21:43:41.283629 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1077 (bootctl) Jul 14 21:43:41.284751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 21:43:41.286879 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 21:43:41.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.291819 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 21:43:41.299793 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 21:43:41.299997 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 21:43:41.356346 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 21:43:41.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.359927 kernel: loop0: detected capacity change from 0 to 203944 Jul 14 21:43:41.369774 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:43:41.376478 systemd-fsck[1089]: fsck.fat 4.2 (2021-01-31) Jul 14 21:43:41.376478 systemd-fsck[1089]: /dev/vda1: 236 files, 117310/258078 clusters Jul 14 21:43:41.378121 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 21:43:41.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.383798 kernel: loop1: detected capacity change from 0 to 203944 Jul 14 21:43:41.388150 (sd-sysext)[1092]: Using extensions 'kubernetes'. Jul 14 21:43:41.388479 (sd-sysext)[1092]: Merged extensions into '/usr'. Jul 14 21:43:41.405894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.407246 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 14 21:43:41.409285 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:43:41.411168 systemd[1]: Starting modprobe@loop.service... Jul 14 21:43:41.411987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.412187 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:43:41.413029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:43:41.413194 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:43:41.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.414384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:43:41.414510 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:43:41.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.415685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:43:41.415851 systemd[1]: Finished modprobe@loop.service. Jul 14 21:43:41.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.417009 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:43:41.417119 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.467628 ldconfig[1076]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:43:41.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.476381 systemd[1]: Finished ldconfig.service. Jul 14 21:43:41.615693 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:43:41.617625 systemd[1]: Mounting boot.mount... Jul 14 21:43:41.619365 systemd[1]: Mounting usr-share-oem.mount... Jul 14 21:43:41.626206 systemd[1]: Mounted boot.mount. Jul 14 21:43:41.627007 systemd[1]: Mounted usr-share-oem.mount. Jul 14 21:43:41.631116 systemd[1]: Finished systemd-sysext.service. 
Jul 14 21:43:41.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.633657 systemd[1]: Starting ensure-sysext.service... Jul 14 21:43:41.635662 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 21:43:41.636901 systemd[1]: Finished systemd-boot-update.service. Jul 14 21:43:41.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.640879 systemd[1]: Reloading. Jul 14 21:43:41.645593 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 21:43:41.646475 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:43:41.647946 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:43:41.682663 /usr/lib/systemd/system-generators/torcx-generator[1120]: time="2025-07-14T21:43:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:43:41.682700 /usr/lib/systemd/system-generators/torcx-generator[1120]: time="2025-07-14T21:43:41Z" level=info msg="torcx already run" Jul 14 21:43:41.741313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:43:41.741335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:43:41.757031 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:43:41.798000 audit: BPF prog-id=30 op=LOAD Jul 14 21:43:41.798000 audit: BPF prog-id=26 op=UNLOAD Jul 14 21:43:41.798000 audit: BPF prog-id=31 op=LOAD Jul 14 21:43:41.798000 audit: BPF prog-id=21 op=UNLOAD Jul 14 21:43:41.799000 audit: BPF prog-id=32 op=LOAD Jul 14 21:43:41.799000 audit: BPF prog-id=33 op=LOAD Jul 14 21:43:41.799000 audit: BPF prog-id=22 op=UNLOAD Jul 14 21:43:41.799000 audit: BPF prog-id=23 op=UNLOAD Jul 14 21:43:41.800000 audit: BPF prog-id=34 op=LOAD Jul 14 21:43:41.800000 audit: BPF prog-id=27 op=UNLOAD Jul 14 21:43:41.800000 audit: BPF prog-id=35 op=LOAD Jul 14 21:43:41.800000 audit: BPF prog-id=36 op=LOAD Jul 14 21:43:41.800000 audit: BPF prog-id=28 op=UNLOAD Jul 14 21:43:41.800000 audit: BPF prog-id=29 op=UNLOAD Jul 14 21:43:41.800000 audit: BPF prog-id=37 op=LOAD Jul 14 21:43:41.800000 audit: BPF prog-id=38 op=LOAD Jul 14 21:43:41.801000 audit: BPF prog-id=24 op=UNLOAD Jul 14 21:43:41.801000 audit: BPF prog-id=25 op=UNLOAD Jul 14 21:43:41.803973 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 21:43:41.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:43:41.808388 systemd[1]: Starting audit-rules.service... Jul 14 21:43:41.810494 systemd[1]: Starting clean-ca-certificates.service... Jul 14 21:43:41.812790 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 21:43:41.814000 audit: BPF prog-id=39 op=LOAD Jul 14 21:43:41.819879 systemd[1]: Starting systemd-resolved.service... Jul 14 21:43:41.822000 audit: BPF prog-id=40 op=LOAD Jul 14 21:43:41.824071 systemd[1]: Starting systemd-timesyncd.service... Jul 14 21:43:41.825952 systemd[1]: Starting systemd-update-utmp.service... Jul 14 21:43:41.828968 systemd[1]: Finished clean-ca-certificates.service. Jul 14 21:43:41.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.830000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.834073 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.835553 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:43:41.837673 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:43:41.839547 systemd[1]: Starting modprobe@loop.service... Jul 14 21:43:41.840269 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.840432 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:43:41.840576 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:43:41.841639 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 21:43:41.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.843003 systemd[1]: Finished systemd-update-utmp.service. Jul 14 21:43:41.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.844066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:43:41.844189 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:43:41.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.845349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:43:41.845485 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 14 21:43:41.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.846682 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:43:41.846903 systemd[1]: Finished modprobe@loop.service. Jul 14 21:43:41.849021 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:43:41.849135 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.850500 systemd[1]: Starting systemd-update-done.service... Jul 14 21:43:41.854157 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.855794 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:43:41.858254 systemd[1]: Starting modprobe@drm.service... Jul 14 21:43:41.860606 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:43:41.862684 systemd[1]: Starting modprobe@loop.service... Jul 14 21:43:41.863429 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.863558 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:43:41.864899 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 21:43:41.865879 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:43:41.867105 systemd[1]: Finished systemd-update-done.service. Jul 14 21:43:41.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.868351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:43:41.868482 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:43:41.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.869566 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:43:41.869682 systemd[1]: Finished modprobe@drm.service. 
Jul 14 21:43:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.870841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:43:41.870955 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:43:41.872025 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:43:41.872146 systemd[1]: Finished modprobe@loop.service. Jul 14 21:43:41.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.874369 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:43:41.874437 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:43:41.875869 systemd[1]: Finished ensure-sysext.service. Jul 14 21:43:41.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:43:41.875000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 21:43:41.875000 audit[1188]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda2dc3a0 a2=420 a3=0 items=0 ppid=1159 pid=1188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:43:41.875000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 21:43:41.877126 augenrules[1188]: No rules Jul 14 21:43:41.878058 systemd[1]: Finished audit-rules.service. Jul 14 21:43:41.889036 systemd[1]: Started systemd-timesyncd.service. Jul 14 21:43:42.315705 systemd-timesyncd[1169]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:43:42.315768 systemd-timesyncd[1169]: Initial clock synchronization to Mon 2025-07-14 21:43:42.315609 UTC. Jul 14 21:43:42.315940 systemd[1]: Reached target time-set.target. Jul 14 21:43:42.318507 systemd-resolved[1163]: Positive Trust Anchors: Jul 14 21:43:42.318518 systemd-resolved[1163]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:43:42.318545 systemd-resolved[1163]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 21:43:42.333957 systemd-resolved[1163]: Defaulting to hostname 'linux'. Jul 14 21:43:42.335441 systemd[1]: Started systemd-resolved.service. Jul 14 21:43:42.336144 systemd[1]: Reached target network.target. Jul 14 21:43:42.336723 systemd[1]: Reached target nss-lookup.target. Jul 14 21:43:42.337279 systemd[1]: Reached target sysinit.target. Jul 14 21:43:42.337913 systemd[1]: Started motdgen.path. Jul 14 21:43:42.338439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 21:43:42.339419 systemd[1]: Started logrotate.timer. Jul 14 21:43:42.340115 systemd[1]: Started mdadm.timer. Jul 14 21:43:42.340630 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 21:43:42.341214 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:43:42.341242 systemd[1]: Reached target paths.target. Jul 14 21:43:42.341835 systemd[1]: Reached target timers.target. Jul 14 21:43:42.342706 systemd[1]: Listening on dbus.socket. Jul 14 21:43:42.344318 systemd[1]: Starting docker.socket... Jul 14 21:43:42.347493 systemd[1]: Listening on sshd.socket. Jul 14 21:43:42.348189 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 21:43:42.348678 systemd[1]: Listening on docker.socket. Jul 14 21:43:42.349303 systemd[1]: Reached target sockets.target. Jul 14 21:43:42.349895 systemd[1]: Reached target basic.target. Jul 14 21:43:42.350449 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 21:43:42.350480 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 21:43:42.351473 systemd[1]: Starting containerd.service... Jul 14 21:43:42.353117 systemd[1]: Starting dbus.service... Jul 14 21:43:42.354781 systemd[1]: Starting enable-oem-cloudinit.service... Jul 14 21:43:42.356536 systemd[1]: Starting extend-filesystems.service... Jul 14 21:43:42.357264 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 21:43:42.358795 systemd[1]: Starting motdgen.service... Jul 14 21:43:42.360143 jq[1198]: false Jul 14 21:43:42.361205 systemd[1]: Starting prepare-helm.service... Jul 14 21:43:42.362929 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 21:43:42.364807 systemd[1]: Starting sshd-keygen.service... Jul 14 21:43:42.368163 systemd[1]: Starting systemd-logind.service... Jul 14 21:43:42.368811 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 14 21:43:42.368896 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:43:42.369360 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:43:42.370068 systemd[1]: Starting update-engine.service... Jul 14 21:43:42.371730 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 14 21:43:42.374669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:43:42.375340 jq[1213]: true Jul 14 21:43:42.374876 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 14 21:43:42.376483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:43:42.377088 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 14 21:43:42.395049 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:43:42.395236 systemd[1]: Finished motdgen.service. Jul 14 21:43:42.400894 extend-filesystems[1199]: Found loop1 Jul 14 21:43:42.401815 tar[1216]: linux-arm64/helm Jul 14 21:43:42.402065 extend-filesystems[1199]: Found vda Jul 14 21:43:42.402658 extend-filesystems[1199]: Found vda1 Jul 14 21:43:42.403231 extend-filesystems[1199]: Found vda2 Jul 14 21:43:42.403832 extend-filesystems[1199]: Found vda3 Jul 14 21:43:42.404507 extend-filesystems[1199]: Found usr Jul 14 21:43:42.405238 extend-filesystems[1199]: Found vda4 Jul 14 21:43:42.410477 extend-filesystems[1199]: Found vda6 Jul 14 21:43:42.410477 extend-filesystems[1199]: Found vda7 Jul 14 21:43:42.410477 extend-filesystems[1199]: Found vda9 Jul 14 21:43:42.410477 extend-filesystems[1199]: Checking size of /dev/vda9 Jul 14 21:43:42.412902 jq[1217]: true Jul 14 21:43:42.420699 dbus-daemon[1197]: [system] SELinux support is enabled Jul 14 21:43:42.420909 systemd[1]: Started dbus.service. Jul 14 21:43:42.423715 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:43:42.423762 systemd[1]: Reached target system-config.target. Jul 14 21:43:42.424439 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:43:42.424471 systemd[1]: Reached target user-config.target. Jul 14 21:43:42.434577 extend-filesystems[1199]: Resized partition /dev/vda9 Jul 14 21:43:42.440491 extend-filesystems[1243]: resize2fs 1.46.5 (30-Dec-2021) Jul 14 21:43:42.453634 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:43:42.468650 update_engine[1212]: I0714 21:43:42.468271 1212 main.cc:92] Flatcar Update Engine starting Jul 14 21:43:42.470556 systemd[1]: Started update-engine.service. Jul 14 21:43:42.470862 systemd-logind[1209]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:43:42.471588 systemd-logind[1209]: New seat seat0. Jul 14 21:43:42.473122 systemd[1]: Started locksmithd.service. Jul 14 21:43:42.476008 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:43:42.476348 update_engine[1212]: I0714 21:43:42.476318 1212 update_check_scheduler.cc:74] Next update check in 9m9s Jul 14 21:43:42.488411 systemd[1]: Started systemd-logind.service. 
Jul 14 21:43:42.490168 extend-filesystems[1243]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:43:42.490168 extend-filesystems[1243]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:43:42.490168 extend-filesystems[1243]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:43:42.493584 extend-filesystems[1199]: Resized filesystem in /dev/vda9 Jul 14 21:43:42.490940 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:43:42.491106 systemd[1]: Finished extend-filesystems.service. Jul 14 21:43:42.494982 bash[1247]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:43:42.495857 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 14 21:43:42.521082 env[1220]: time="2025-07-14T21:43:42.521026611Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 14 21:43:42.543164 env[1220]: time="2025-07-14T21:43:42.543073451Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:43:42.543260 env[1220]: time="2025-07-14T21:43:42.543218531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.543356 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:43:42.544683 env[1220]: time="2025-07-14T21:43:42.544643651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:43:42.544683 env[1220]: time="2025-07-14T21:43:42.544676091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.544907 env[1220]: time="2025-07-14T21:43:42.544876891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:43:42.544907 env[1220]: time="2025-07-14T21:43:42.544899211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.544976 env[1220]: time="2025-07-14T21:43:42.544912451Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 14 21:43:42.544976 env[1220]: time="2025-07-14T21:43:42.544922251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.545019 env[1220]: time="2025-07-14T21:43:42.544988851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.545295 env[1220]: time="2025-07-14T21:43:42.545261691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:43:42.545406 env[1220]: time="2025-07-14T21:43:42.545385131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:43:42.545406 env[1220]: time="2025-07-14T21:43:42.545405171Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 21:43:42.545470 env[1220]: time="2025-07-14T21:43:42.545454931Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 14 21:43:42.545502 env[1220]: time="2025-07-14T21:43:42.545471251Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:43:42.549028 env[1220]: time="2025-07-14T21:43:42.548994011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:43:42.549028 env[1220]: time="2025-07-14T21:43:42.549026531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:43:42.549128 env[1220]: time="2025-07-14T21:43:42.549040251Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:43:42.549128 env[1220]: time="2025-07-14T21:43:42.549071291Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549128 env[1220]: time="2025-07-14T21:43:42.549086611Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549128 env[1220]: time="2025-07-14T21:43:42.549100251Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549128 env[1220]: time="2025-07-14T21:43:42.549113011Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549460 env[1220]: time="2025-07-14T21:43:42.549439211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549498 env[1220]: time="2025-07-14T21:43:42.549466931Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549498 env[1220]: time="2025-07-14T21:43:42.549481931Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549498 env[1220]: time="2025-07-14T21:43:42.549494051Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.549553 env[1220]: time="2025-07-14T21:43:42.549509811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 21:43:42.549667 env[1220]: time="2025-07-14T21:43:42.549648891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:43:42.549745 env[1220]: time="2025-07-14T21:43:42.549731331Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:43:42.549978 env[1220]: time="2025-07-14T21:43:42.549964251Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:43:42.550011 env[1220]: time="2025-07-14T21:43:42.549993251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 14 21:43:42.550011 env[1220]: time="2025-07-14T21:43:42.550006971Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:43:42.550154 env[1220]: time="2025-07-14T21:43:42.550143491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550182 env[1220]: time="2025-07-14T21:43:42.550159211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550182 env[1220]: time="2025-07-14T21:43:42.550172371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550242 env[1220]: time="2025-07-14T21:43:42.550183771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550242 env[1220]: time="2025-07-14T21:43:42.550205571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550242 env[1220]: time="2025-07-14T21:43:42.550218331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550242 env[1220]: time="2025-07-14T21:43:42.550229091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550242 env[1220]: time="2025-07-14T21:43:42.550241131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550338 env[1220]: time="2025-07-14T21:43:42.550254171Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:43:42.550436 env[1220]: time="2025-07-14T21:43:42.550378171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550436 env[1220]: time="2025-07-14T21:43:42.550405171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550436 env[1220]: time="2025-07-14T21:43:42.550417651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 21:43:42.550436 env[1220]: time="2025-07-14T21:43:42.550428931Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:43:42.550540 env[1220]: time="2025-07-14T21:43:42.550442211Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 14 21:43:42.550540 env[1220]: time="2025-07-14T21:43:42.550453491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:43:42.550540 env[1220]: time="2025-07-14T21:43:42.550477731Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 14 21:43:42.550540 env[1220]: time="2025-07-14T21:43:42.550515411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 21:43:42.550796 env[1220]: time="2025-07-14T21:43:42.550740771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.550802931Z" level=info msg="Connect containerd service" Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.550831971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.551573371Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.552012611Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.552052211Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:43:42.553982 env[1220]: time="2025-07-14T21:43:42.553294851Z" level=info msg="containerd successfully booted in 0.034239s" Jul 14 21:43:42.552177 systemd[1]: Started containerd.service. 
Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554412931Z" level=info msg="Start subscribing containerd event" Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554592891Z" level=info msg="Start recovering state" Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554669251Z" level=info msg="Start event monitor" Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554692051Z" level=info msg="Start snapshots syncer" Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554703571Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:43:42.555419 env[1220]: time="2025-07-14T21:43:42.554714411Z" level=info msg="Start streaming server" Jul 14 21:43:42.796155 tar[1216]: linux-arm64/LICENSE Jul 14 21:43:42.796262 tar[1216]: linux-arm64/README.md Jul 14 21:43:42.800354 systemd[1]: Finished prepare-helm.service. Jul 14 21:43:43.290735 systemd-networkd[1050]: eth0: Gained IPv6LL Jul 14 21:43:43.293026 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 21:43:43.294120 systemd[1]: Reached target network-online.target. Jul 14 21:43:43.296210 systemd[1]: Starting kubelet.service... Jul 14 21:43:43.719812 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:43:43.738672 systemd[1]: Finished sshd-keygen.service. Jul 14 21:43:43.740875 systemd[1]: Starting issuegen.service... Jul 14 21:43:43.745608 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:43:43.745768 systemd[1]: Finished issuegen.service. Jul 14 21:43:43.747806 systemd[1]: Starting systemd-user-sessions.service... Jul 14 21:43:43.754287 systemd[1]: Finished systemd-user-sessions.service. Jul 14 21:43:43.756459 systemd[1]: Started getty@tty1.service. Jul 14 21:43:43.758453 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 14 21:43:43.759631 systemd[1]: Reached target getty.target. Jul 14 21:43:43.888190 systemd[1]: Started kubelet.service. Jul 14 21:43:43.889322 systemd[1]: Reached target multi-user.target. Jul 14 21:43:43.891437 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 14 21:43:43.898412 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 14 21:43:43.898589 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 14 21:43:43.899494 systemd[1]: Startup finished in 612ms (kernel) + 5.075s (initrd) + 4.809s (userspace) = 10.496s. Jul 14 21:43:44.382044 kubelet[1279]: E0714 21:43:44.382004 1279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:43:44.384086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:43:44.384212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:43:47.156952 systemd[1]: Created slice system-sshd.slice. Jul 14 21:43:47.158033 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:35242.service. Jul 14 21:43:47.211149 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 35242 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:43:47.215677 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.225923 systemd-logind[1209]: New session 1 of user core. Jul 14 21:43:47.226891 systemd[1]: Created slice user-500.slice. 
Jul 14 21:43:47.228093 systemd[1]: Starting user-runtime-dir@500.service... Jul 14 21:43:47.239183 systemd[1]: Finished user-runtime-dir@500.service. Jul 14 21:43:47.240834 systemd[1]: Starting user@500.service... Jul 14 21:43:47.245350 (systemd)[1291]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.308605 systemd[1291]: Queued start job for default target default.target. Jul 14 21:43:47.309121 systemd[1291]: Reached target paths.target. Jul 14 21:43:47.309153 systemd[1291]: Reached target sockets.target. Jul 14 21:43:47.309166 systemd[1291]: Reached target timers.target. Jul 14 21:43:47.309176 systemd[1291]: Reached target basic.target. Jul 14 21:43:47.309216 systemd[1291]: Reached target default.target. Jul 14 21:43:47.309240 systemd[1291]: Startup finished in 57ms. Jul 14 21:43:47.309819 systemd[1]: Started user@500.service. Jul 14 21:43:47.310790 systemd[1]: Started session-1.scope. Jul 14 21:43:47.367298 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:35252.service. Jul 14 21:43:47.418341 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 35252 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:43:47.418919 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.424242 systemd-logind[1209]: New session 2 of user core. Jul 14 21:43:47.425198 systemd[1]: Started session-2.scope. Jul 14 21:43:47.483917 sshd[1300]: pam_unix(sshd:session): session closed for user core Jul 14 21:43:47.489979 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:35252.service: Deactivated successfully. Jul 14 21:43:47.491735 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:43:47.492410 systemd-logind[1209]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:43:47.494728 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:35266.service. Jul 14 21:43:47.496522 systemd-logind[1209]: Removed session 2. Jul 14 21:43:47.534323 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 35266 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:43:47.535697 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.540654 systemd-logind[1209]: New session 3 of user core. Jul 14 21:43:47.540852 systemd[1]: Started session-3.scope. Jul 14 21:43:47.597423 sshd[1306]: pam_unix(sshd:session): session closed for user core Jul 14 21:43:47.601089 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:35268.service. Jul 14 21:43:47.601767 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:35266.service: Deactivated successfully. Jul 14 21:43:47.605078 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:43:47.605752 systemd-logind[1209]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:43:47.606722 systemd-logind[1209]: Removed session 3. Jul 14 21:43:47.638607 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 35268 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:43:47.639897 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.643711 systemd-logind[1209]: New session 4 of user core. Jul 14 21:43:47.644249 systemd[1]: Started session-4.scope. Jul 14 21:43:47.705185 sshd[1311]: pam_unix(sshd:session): session closed for user core Jul 14 21:43:47.713979 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:35282.service. Jul 14 21:43:47.714720 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:35268.service: Deactivated successfully. 
Jul 14 21:43:47.715634 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:43:47.719847 systemd-logind[1209]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:43:47.721392 systemd-logind[1209]: Removed session 4. Jul 14 21:43:47.759681 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 35282 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:43:47.760937 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:43:47.764281 systemd-logind[1209]: New session 5 of user core. Jul 14 21:43:47.767716 systemd[1]: Started session-5.scope. Jul 14 21:43:47.825816 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:43:47.826043 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 14 21:43:47.876388 systemd[1]: Starting docker.service... Jul 14 21:43:47.962848 env[1333]: time="2025-07-14T21:43:47.962721371Z" level=info msg="Starting up" Jul 14 21:43:47.964790 env[1333]: time="2025-07-14T21:43:47.964755891Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 14 21:43:47.964911 env[1333]: time="2025-07-14T21:43:47.964897251Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 14 21:43:47.964981 env[1333]: time="2025-07-14T21:43:47.964964611Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 14 21:43:47.965052 env[1333]: time="2025-07-14T21:43:47.965039571Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 14 21:43:47.967097 env[1333]: time="2025-07-14T21:43:47.967052531Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 14 21:43:47.967097 env[1333]: time="2025-07-14T21:43:47.967078611Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 14 21:43:47.967097 env[1333]: time="2025-07-14T21:43:47.967093811Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 14 21:43:47.967097 env[1333]: time="2025-07-14T21:43:47.967104771Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 14 21:43:48.301929 env[1333]: time="2025-07-14T21:43:48.301841331Z" level=info msg="Loading containers: start." Jul 14 21:43:48.424627 kernel: Initializing XFRM netlink socket Jul 14 21:43:48.449802 env[1333]: time="2025-07-14T21:43:48.449765491Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 14 21:43:48.512474 systemd-networkd[1050]: docker0: Link UP Jul 14 21:43:48.531993 env[1333]: time="2025-07-14T21:43:48.531934731Z" level=info msg="Loading containers: done." Jul 14 21:43:48.556661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3948498906-merged.mount: Deactivated successfully. 
Jul 14 21:43:48.560459 env[1333]: time="2025-07-14T21:43:48.560408571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:43:48.560631 env[1333]: time="2025-07-14T21:43:48.560612691Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 14 21:43:48.560724 env[1333]: time="2025-07-14T21:43:48.560709691Z" level=info msg="Daemon has completed initialization" Jul 14 21:43:48.589778 systemd[1]: Started docker.service. Jul 14 21:43:48.594948 env[1333]: time="2025-07-14T21:43:48.594895531Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:43:49.241392 env[1220]: time="2025-07-14T21:43:49.241328451Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 21:43:49.916912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284535592.mount: Deactivated successfully. Jul 14 21:43:51.489852 env[1220]: time="2025-07-14T21:43:51.489781851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:51.495653 env[1220]: time="2025-07-14T21:43:51.495585451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:51.497453 env[1220]: time="2025-07-14T21:43:51.497410651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:51.501035 env[1220]: time="2025-07-14T21:43:51.500993291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:51.502115 env[1220]: time="2025-07-14T21:43:51.502073851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 14 21:43:51.506361 env[1220]: time="2025-07-14T21:43:51.506327251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 21:43:52.894425 env[1220]: time="2025-07-14T21:43:52.894371491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:52.897944 env[1220]: time="2025-07-14T21:43:52.897888491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:52.901697 env[1220]: time="2025-07-14T21:43:52.901630971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:52.904749 env[1220]: time="2025-07-14T21:43:52.904699891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 14 21:43:52.905005 env[1220]: time="2025-07-14T21:43:52.904943531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 14 21:43:52.905646 env[1220]: time="2025-07-14T21:43:52.905616531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 21:43:54.104355 env[1220]: time="2025-07-14T21:43:54.104304211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:54.105850 env[1220]: time="2025-07-14T21:43:54.105801811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:54.107815 env[1220]: time="2025-07-14T21:43:54.107775611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:54.110823 env[1220]: time="2025-07-14T21:43:54.110784451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:54.111327 env[1220]: time="2025-07-14T21:43:54.111296971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 14 21:43:54.111844 env[1220]: time="2025-07-14T21:43:54.111807491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 21:43:54.446955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:43:54.447133 systemd[1]: Stopped kubelet.service. Jul 14 21:43:54.448732 systemd[1]: Starting kubelet.service... Jul 14 21:43:54.547778 systemd[1]: Started kubelet.service. Jul 14 21:43:54.596049 kubelet[1466]: E0714 21:43:54.595992 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:43:54.598585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:43:54.598723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:43:55.237910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053488759.mount: Deactivated successfully. 
Jul 14 21:43:55.844473 env[1220]: time="2025-07-14T21:43:55.844418451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:55.845956 env[1220]: time="2025-07-14T21:43:55.845916211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:55.847315 env[1220]: time="2025-07-14T21:43:55.847277291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:55.848578 env[1220]: time="2025-07-14T21:43:55.848547891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:55.849063 env[1220]: time="2025-07-14T21:43:55.849026411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 14 21:43:55.851141 env[1220]: time="2025-07-14T21:43:55.851107491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:43:56.532367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490580122.mount: Deactivated successfully. Jul 14 21:43:57.491439 env[1220]: time="2025-07-14T21:43:57.491390051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.493026 env[1220]: time="2025-07-14T21:43:57.492989331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.495155 env[1220]: time="2025-07-14T21:43:57.495113451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.497236 env[1220]: time="2025-07-14T21:43:57.497206291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.498114 env[1220]: time="2025-07-14T21:43:57.498088971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:43:57.499007 env[1220]: time="2025-07-14T21:43:57.498982771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:43:57.976800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515879228.mount: Deactivated successfully. 
Jul 14 21:43:57.981113 env[1220]: time="2025-07-14T21:43:57.981055091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.982482 env[1220]: time="2025-07-14T21:43:57.982443931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.983885 env[1220]: time="2025-07-14T21:43:57.983845251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.985156 env[1220]: time="2025-07-14T21:43:57.985112811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:43:57.986409 env[1220]: time="2025-07-14T21:43:57.986364931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:43:57.986964 env[1220]: time="2025-07-14T21:43:57.986936571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 21:43:58.700833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288562961.mount: Deactivated successfully. Jul 14 21:44:01.005756 env[1220]: time="2025-07-14T21:44:01.005460091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:01.009937 env[1220]: time="2025-07-14T21:44:01.009885851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:01.011921 env[1220]: time="2025-07-14T21:44:01.011881371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:01.014052 env[1220]: time="2025-07-14T21:44:01.014016571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:01.015075 env[1220]: time="2025-07-14T21:44:01.015031251Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 14 21:44:04.696975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 21:44:04.697151 systemd[1]: Stopped kubelet.service. Jul 14 21:44:04.698632 systemd[1]: Starting kubelet.service... Jul 14 21:44:04.799444 systemd[1]: Started kubelet.service. 
Jul 14 21:44:04.857142 kubelet[1499]: E0714 21:44:04.857095 1499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:44:04.858772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:44:04.858897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:44:06.422910 systemd[1]: Stopped kubelet.service. Jul 14 21:44:06.424996 systemd[1]: Starting kubelet.service... Jul 14 21:44:06.448464 systemd[1]: Reloading. Jul 14 21:44:06.504233 /usr/lib/systemd/system-generators/torcx-generator[1534]: time="2025-07-14T21:44:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:06.504261 /usr/lib/systemd/system-generators/torcx-generator[1534]: time="2025-07-14T21:44:06Z" level=info msg="torcx already run" Jul 14 21:44:06.622921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:06.622942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:06.638300 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:44:06.705639 systemd[1]: Started kubelet.service. Jul 14 21:44:06.706908 systemd[1]: Stopping kubelet.service... Jul 14 21:44:06.707147 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:44:06.707311 systemd[1]: Stopped kubelet.service. Jul 14 21:44:06.708812 systemd[1]: Starting kubelet.service... Jul 14 21:44:06.806370 systemd[1]: Started kubelet.service. Jul 14 21:44:06.855221 kubelet[1579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:44:06.855585 kubelet[1579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:44:06.855671 kubelet[1579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
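
Both kubelet exits above (instances 1466 and 1499) have the same cause: `/var/lib/kubelet/config.yaml` does not exist yet, so the process exits with status 1 and systemd schedules another restart. A minimal stand-alone sketch of that pre-flight check using only the Go standard library (the path is taken from the log; everything else is illustrative, not kubelet's actual startup code):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// kubeletConfigPath is the file the log shows kubelet failing to read.
const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

func main() {
	// Reproduce the failure mode from the log: a missing config file
	// makes the process exit non-zero, which systemd then reports as
	// "Main process exited, code=exited, status=1/FAILURE".
	if _, err := os.Stat(kubeletConfigPath); errors.Is(err, fs.ErrNotExist) {
		fmt.Fprintf(os.Stderr,
			"failed to load kubelet config file, path: %s, error: %v\n",
			kubeletConfigPath, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config file present; startup can proceed")
}
```

The start at 21:44:06 gets past this point, which suggests the config file had been written by then, presumably by whatever bootstrapper drives this node.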
Jul 14 21:44:06.855799 kubelet[1579]: I0714 21:44:06.855765 1579 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:44:08.031361 kubelet[1579]: I0714 21:44:08.031316 1579 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:44:08.031752 kubelet[1579]: I0714 21:44:08.031737 1579 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:44:08.032094 kubelet[1579]: I0714 21:44:08.032075 1579 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:44:08.082701 kubelet[1579]: E0714 21:44:08.082654 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:08.083843 kubelet[1579]: I0714 21:44:08.083797 1579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:44:08.092935 kubelet[1579]: E0714 21:44:08.092900 1579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:44:08.093077 kubelet[1579]: I0714 21:44:08.093061 1579 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:44:08.096722 kubelet[1579]: I0714 21:44:08.096695 1579 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:44:08.097696 kubelet[1579]: I0714 21:44:08.097670 1579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:44:08.097952 kubelet[1579]: I0714 21:44:08.097918 1579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:44:08.098187 kubelet[1579]: I0714 21:44:08.098017 1579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:44:08.098305 kubelet[1579]: I0714 21:44:08.098293 1579 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:44:08.098367 kubelet[1579]: I0714 21:44:08.098358 1579 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:44:08.098687 kubelet[1579]: I0714 21:44:08.098672 1579 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:08.103316 kubelet[1579]: I0714 21:44:08.103288 1579 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:44:08.103448 kubelet[1579]: I0714 21:44:08.103436 1579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:44:08.103543 kubelet[1579]: I0714 21:44:08.103532 1579 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:44:08.103660 kubelet[1579]: I0714 21:44:08.103651 1579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:44:08.105785 kubelet[1579]: W0714 21:44:08.105733 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:08.105864 kubelet[1579]: E0714 21:44:08.105795 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:08.106176 kubelet[1579]: W0714 21:44:08.106139 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:08.106218 kubelet[1579]: E0714 21:44:08.106183 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:08.116320 kubelet[1579]: I0714 21:44:08.116290 1579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:44:08.117033 kubelet[1579]: I0714 21:44:08.117010 1579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:44:08.117267 kubelet[1579]: W0714 21:44:08.117256 1579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:44:08.118336 kubelet[1579]: I0714 21:44:08.118315 1579 server.go:1274] "Started kubelet" Jul 14 21:44:08.119427 kubelet[1579]: I0714 21:44:08.119233 1579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:44:08.119515 kubelet[1579]: I0714 21:44:08.119443 1579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:44:08.120051 kubelet[1579]: I0714 21:44:08.119963 1579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:44:08.120474 kubelet[1579]: I0714 21:44:08.120453 1579 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:44:08.120617 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
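
Every reflector and lease call in the entries above fails with `dial tcp 10.0.0.9:6443: connect: connection refused`: the kubelet is trying to watch Nodes, Services and CSIDrivers on an API server whose static pod it has not yet started, so the refusals are expected during bootstrap. A minimal probe of that endpoint with the Go standard library (address and port copied from the log; the probe itself is only an illustration):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint the kubelet reflectors are dialing in the log above.
	const apiServer = "10.0.0.9:6443"

	conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
	if err != nil {
		// Until the kube-apiserver static pod is up, this prints the
		// same "connect: connection refused" seen in the log.
		fmt.Printf("apiserver not reachable yet: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver is accepting TCP connections")
}
```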
Jul 14 21:44:08.120672 kubelet[1579]: I0714 21:44:08.120591 1579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:44:08.121454 kubelet[1579]: I0714 21:44:08.121432 1579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:44:08.123116 kubelet[1579]: I0714 21:44:08.123028 1579 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:44:08.123195 kubelet[1579]: I0714 21:44:08.123135 1579 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:44:08.123195 kubelet[1579]: I0714 21:44:08.123191 1579 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:44:08.123814 kubelet[1579]: W0714 21:44:08.123769 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:08.123894 kubelet[1579]: E0714 21:44:08.123847 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:08.124452 kubelet[1579]: E0714 21:44:08.124429 1579 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:44:08.124667 kubelet[1579]: E0714 21:44:08.124638 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:08.124795 kubelet[1579]: I0714 21:44:08.124775 1579 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:44:08.124795 kubelet[1579]: I0714 21:44:08.124793 1579 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:44:08.124933 kubelet[1579]: I0714 21:44:08.124858 1579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:44:08.124933 kubelet[1579]: E0714 21:44:08.124901 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms" Jul 14 21:44:08.130926 kubelet[1579]: E0714 21:44:08.129744 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c3f6d807cd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:44:08.118287571 +0000 UTC m=+1.308592441,LastTimestamp:2025-07-14 21:44:08.118287571 +0000 UTC m=+1.308592441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:44:08.143102 
kubelet[1579]: I0714 21:44:08.143076 1579 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:44:08.143256 kubelet[1579]: I0714 21:44:08.143244 1579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:44:08.143315 kubelet[1579]: I0714 21:44:08.143306 1579 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:08.143809 kubelet[1579]: I0714 21:44:08.143623 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:44:08.144803 kubelet[1579]: I0714 21:44:08.144773 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:44:08.144803 kubelet[1579]: I0714 21:44:08.144804 1579 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:44:08.147787 kubelet[1579]: I0714 21:44:08.144824 1579 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:44:08.147787 kubelet[1579]: E0714 21:44:08.144870 1579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:44:08.147865 kubelet[1579]: W0714 21:44:08.147780 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:08.147865 kubelet[1579]: E0714 21:44:08.147827 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:08.224870 kubelet[1579]: E0714 21:44:08.224827 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:08.245991 kubelet[1579]: E0714 21:44:08.245961 1579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:44:08.325746 kubelet[1579]: E0714 21:44:08.325636 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms" Jul 14 21:44:08.325891 kubelet[1579]: E0714 21:44:08.325867 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:08.330403 kubelet[1579]: I0714 21:44:08.330380 1579 policy_none.go:49] "None policy: Start" Jul 14 21:44:08.331383 kubelet[1579]: I0714 21:44:08.331339 1579 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:44:08.331383 kubelet[1579]: I0714 21:44:08.331376 1579 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:44:08.402927 systemd[1]: Created slice kubepods.slice. Jul 14 21:44:08.407744 systemd[1]: Created slice kubepods-besteffort.slice. Jul 14 21:44:08.422040 systemd[1]: Created slice kubepods-burstable.slice. 
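
The `kubepods.slice`, `kubepods-besteffort.slice` and `kubepods-burstable.slice` units created above, and the per-pod `kubepods-burstable-pod<hash>.slice` units created just below, are systemd slices, so each dash-separated segment adds one level of the cgroup hierarchy under `/sys/fs/cgroup` (the node runs with `CgroupVersion:2` and the systemd cgroup driver, per the nodeConfig dump earlier). A small sketch of that standard slice-name expansion; the helper is illustrative, and the static-pod hashes here contain no dashes, so no extra escaping is needed:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// slicePath expands a systemd slice name into its cgroupfs directory:
// "kubepods-burstable-podX.slice" ->
// "kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podX.slice".
func slicePath(slice string) string {
	name := strings.TrimSuffix(slice, ".slice")
	parts := strings.Split(name, "-")
	dirs := make([]string, 0, len(parts))
	for i := range parts {
		dirs = append(dirs, strings.Join(parts[:i+1], "-")+".slice")
	}
	return path.Join(dirs...)
}

func main() {
	// Slice name taken from the kube-controller-manager pod entry below.
	fmt.Println(path.Join("/sys/fs/cgroup",
		slicePath("kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice")))
}
```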
Jul 14 21:44:08.423287 kubelet[1579]: I0714 21:44:08.423261 1579 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:44:08.423732 kubelet[1579]: I0714 21:44:08.423714 1579 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:44:08.423852 kubelet[1579]: I0714 21:44:08.423816 1579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:44:08.424279 kubelet[1579]: I0714 21:44:08.424257 1579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:44:08.425759 kubelet[1579]: E0714 21:44:08.425736 1579 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:44:08.453996 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 14 21:44:08.477045 systemd[1]: Created slice kubepods-burstable-podfc55bae11de9f21dc28c65c7df3ecad1.slice. Jul 14 21:44:08.486742 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 14 21:44:08.525319 kubelet[1579]: I0714 21:44:08.525288 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:08.526086 kubelet[1579]: E0714 21:44:08.526053 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Jul 14 21:44:08.526206 kubelet[1579]: I0714 21:44:08.526065 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:08.526304 kubelet[1579]: I0714 21:44:08.526290 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:08.526393 kubelet[1579]: I0714 21:44:08.526376 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:08.526479 kubelet[1579]: I0714 21:44:08.526464 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:08.526564 kubelet[1579]: I0714 21:44:08.526549 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:08.526669 
kubelet[1579]: I0714 21:44:08.526654 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:08.526750 kubelet[1579]: I0714 21:44:08.526738 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:08.526823 kubelet[1579]: I0714 21:44:08.526810 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:08.526898 kubelet[1579]: I0714 21:44:08.526884 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:08.726930 kubelet[1579]: E0714 21:44:08.726874 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms" Jul 14 21:44:08.727236 kubelet[1579]: I0714 21:44:08.727210 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:08.727537 kubelet[1579]: E0714 21:44:08.727499 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Jul 14 21:44:08.775931 kubelet[1579]: E0714 21:44:08.775881 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:08.776626 env[1220]: time="2025-07-14T21:44:08.776515811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:08.786578 kubelet[1579]: E0714 21:44:08.786528 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:08.787083 env[1220]: time="2025-07-14T21:44:08.787037331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc55bae11de9f21dc28c65c7df3ecad1,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:08.788618 kubelet[1579]: E0714 21:44:08.788572 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:08.789119 env[1220]: time="2025-07-14T21:44:08.789086851Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:09.129336 kubelet[1579]: I0714 21:44:09.129228 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:09.129960 kubelet[1579]: E0714 21:44:09.129926 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Jul 14 21:44:09.148625 kubelet[1579]: W0714 21:44:09.148544 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:09.148625 kubelet[1579]: E0714 21:44:09.148589 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:09.255379 kubelet[1579]: W0714 21:44:09.255294 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:09.255379 kubelet[1579]: E0714 21:44:09.255373 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:09.362248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219495740.mount: Deactivated successfully. 
Jul 14 21:44:09.374625 env[1220]: time="2025-07-14T21:44:09.374567451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.375799 env[1220]: time="2025-07-14T21:44:09.375745411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.379556 env[1220]: time="2025-07-14T21:44:09.379453611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.381661 env[1220]: time="2025-07-14T21:44:09.381625891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.384036 env[1220]: time="2025-07-14T21:44:09.383997931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.384743 env[1220]: time="2025-07-14T21:44:09.384713011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.388062 env[1220]: time="2025-07-14T21:44:09.388027811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.389467 env[1220]: time="2025-07-14T21:44:09.389434451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.392319 env[1220]: time="2025-07-14T21:44:09.392289291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.393932 env[1220]: time="2025-07-14T21:44:09.393900811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.395255 env[1220]: time="2025-07-14T21:44:09.395219891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.396467 env[1220]: time="2025-07-14T21:44:09.396438091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:09.415863 kubelet[1579]: W0714 21:44:09.415750 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:09.415863 kubelet[1579]: E0714 21:44:09.415826 1579 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:09.444922 env[1220]: time="2025-07-14T21:44:09.444821371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:09.444922 env[1220]: time="2025-07-14T21:44:09.444865131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:09.444922 env[1220]: time="2025-07-14T21:44:09.444875731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:09.445855 env[1220]: time="2025-07-14T21:44:09.445794891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:09.445963 env[1220]: time="2025-07-14T21:44:09.445862611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:09.445963 env[1220]: time="2025-07-14T21:44:09.445889491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:09.446193 env[1220]: time="2025-07-14T21:44:09.446140851Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b925cde27e2cae1bdc66893b2de82778c6db929bd7d2a02cd60aeaf279d4d4f pid=1638 runtime=io.containerd.runc.v2 Jul 14 21:44:09.446506 env[1220]: time="2025-07-14T21:44:09.446436691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ddf1132ebfc6363b0f99c83dd7fa07fefe82e393c14516ef69253224912f6fc pid=1635 runtime=io.containerd.runc.v2 Jul 14 21:44:09.446589 env[1220]: time="2025-07-14T21:44:09.446541411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:09.446681 env[1220]: time="2025-07-14T21:44:09.446611451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:09.446681 env[1220]: time="2025-07-14T21:44:09.446644051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:09.446969 env[1220]: time="2025-07-14T21:44:09.446918971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a64806a9001d7ce3bd67f83fa3c06bdcadf6aeccd6069f07fd947b52a7fa6d0 pid=1636 runtime=io.containerd.runc.v2 Jul 14 21:44:09.460672 systemd[1]: Started cri-containerd-8b925cde27e2cae1bdc66893b2de82778c6db929bd7d2a02cd60aeaf279d4d4f.scope. Jul 14 21:44:09.467568 systemd[1]: Started cri-containerd-4ddf1132ebfc6363b0f99c83dd7fa07fefe82e393c14516ef69253224912f6fc.scope. Jul 14 21:44:09.483745 systemd[1]: Started cri-containerd-7a64806a9001d7ce3bd67f83fa3c06bdcadf6aeccd6069f07fd947b52a7fa6d0.scope. 
Jul 14 21:44:09.527751 kubelet[1579]: E0714 21:44:09.527702 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="1.6s" Jul 14 21:44:09.573344 env[1220]: time="2025-07-14T21:44:09.567561451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b925cde27e2cae1bdc66893b2de82778c6db929bd7d2a02cd60aeaf279d4d4f\"" Jul 14 21:44:09.573344 env[1220]: time="2025-07-14T21:44:09.570433051Z" level=info msg="CreateContainer within sandbox \"8b925cde27e2cae1bdc66893b2de82778c6db929bd7d2a02cd60aeaf279d4d4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:44:09.573521 kubelet[1579]: E0714 21:44:09.568669 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:09.598507 env[1220]: time="2025-07-14T21:44:09.598445651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc55bae11de9f21dc28c65c7df3ecad1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ddf1132ebfc6363b0f99c83dd7fa07fefe82e393c14516ef69253224912f6fc\"" Jul 14 21:44:09.599418 kubelet[1579]: E0714 21:44:09.599214 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:09.601090 env[1220]: time="2025-07-14T21:44:09.600884531Z" level=info msg="CreateContainer within sandbox \"8b925cde27e2cae1bdc66893b2de82778c6db929bd7d2a02cd60aeaf279d4d4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"233839beb879e887686971838c3c033e13664437126a0d24e0b66084b68dc87f\"" Jul 14 21:44:09.601516 env[1220]: time="2025-07-14T21:44:09.601485371Z" level=info msg="StartContainer for \"233839beb879e887686971838c3c033e13664437126a0d24e0b66084b68dc87f\"" Jul 14 21:44:09.601571 env[1220]: time="2025-07-14T21:44:09.601505291Z" level=info msg="CreateContainer within sandbox \"4ddf1132ebfc6363b0f99c83dd7fa07fefe82e393c14516ef69253224912f6fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:44:09.615006 env[1220]: time="2025-07-14T21:44:09.614957851Z" level=info msg="CreateContainer within sandbox \"4ddf1132ebfc6363b0f99c83dd7fa07fefe82e393c14516ef69253224912f6fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa5a80fbafe34907888eec4e6f29fe3bf276f54c574723b88d4b94cbb5407d82\"" Jul 14 21:44:09.615882 env[1220]: time="2025-07-14T21:44:09.615831131Z" level=info msg="StartContainer for \"fa5a80fbafe34907888eec4e6f29fe3bf276f54c574723b88d4b94cbb5407d82\"" Jul 14 21:44:09.617818 systemd[1]: Started cri-containerd-233839beb879e887686971838c3c033e13664437126a0d24e0b66084b68dc87f.scope. 
Jul 14 21:44:09.620939 env[1220]: time="2025-07-14T21:44:09.620888211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a64806a9001d7ce3bd67f83fa3c06bdcadf6aeccd6069f07fd947b52a7fa6d0\"" Jul 14 21:44:09.621982 kubelet[1579]: E0714 21:44:09.621801 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:09.623774 env[1220]: time="2025-07-14T21:44:09.623732651Z" level=info msg="CreateContainer within sandbox \"7a64806a9001d7ce3bd67f83fa3c06bdcadf6aeccd6069f07fd947b52a7fa6d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:44:09.637851 systemd[1]: Started cri-containerd-fa5a80fbafe34907888eec4e6f29fe3bf276f54c574723b88d4b94cbb5407d82.scope. Jul 14 21:44:09.657999 env[1220]: time="2025-07-14T21:44:09.657951211Z" level=info msg="CreateContainer within sandbox \"7a64806a9001d7ce3bd67f83fa3c06bdcadf6aeccd6069f07fd947b52a7fa6d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5120966f60287088ff03031568738aba5a8e581dad9ac07da69b5550822db14c\"" Jul 14 21:44:09.658643 env[1220]: time="2025-07-14T21:44:09.658615211Z" level=info msg="StartContainer for \"5120966f60287088ff03031568738aba5a8e581dad9ac07da69b5550822db14c\"" Jul 14 21:44:09.676048 kubelet[1579]: W0714 21:44:09.675925 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Jul 14 21:44:09.676048 kubelet[1579]: E0714 21:44:09.676004 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:44:09.683557 systemd[1]: Started cri-containerd-5120966f60287088ff03031568738aba5a8e581dad9ac07da69b5550822db14c.scope. 
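
The sandbox and container lifecycle recorded above and in the entries that follow (`RunPodSandbox` returning a sandbox id, `CreateContainer` within that sandbox, then `StartContainer ... returns successfully`) is containerd's CRI plugin answering the kubelet's gRPC calls on its local socket. A rough sketch of the same three calls made directly with the CRI API client, assuming the `k8s.io/cri-api` v1 package layout and a heavily trimmed sandbox/container config; names, UID and image tag are copied from the log, and this is not the kubelet's code:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The CRI endpoint containerd serves; the kubelet in this log uses
	// the same socket (see the --container-runtime-endpoint warning).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. RunPodSandbox: returns the sandbox id reused in later entries.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Namespace: "kube-system",
			Uid:       "b35b56493416c25588cb530e37ffc065",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.10"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, mirroring the "StartContainer ... returns
	//    successfully" entries that follow in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", created.ContainerId)
}
```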
Jul 14 21:44:09.741542 env[1220]: time="2025-07-14T21:44:09.741494571Z" level=info msg="StartContainer for \"fa5a80fbafe34907888eec4e6f29fe3bf276f54c574723b88d4b94cbb5407d82\" returns successfully" Jul 14 21:44:09.758908 env[1220]: time="2025-07-14T21:44:09.751487691Z" level=info msg="StartContainer for \"5120966f60287088ff03031568738aba5a8e581dad9ac07da69b5550822db14c\" returns successfully" Jul 14 21:44:09.759379 env[1220]: time="2025-07-14T21:44:09.759029691Z" level=info msg="StartContainer for \"233839beb879e887686971838c3c033e13664437126a0d24e0b66084b68dc87f\" returns successfully" Jul 14 21:44:09.931553 kubelet[1579]: I0714 21:44:09.931331 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:09.931970 kubelet[1579]: E0714 21:44:09.931722 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Jul 14 21:44:10.152270 kubelet[1579]: E0714 21:44:10.152230 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:10.154259 kubelet[1579]: E0714 21:44:10.154231 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:10.155976 kubelet[1579]: E0714 21:44:10.155952 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:11.158096 kubelet[1579]: E0714 21:44:11.158062 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:11.158435 kubelet[1579]: E0714 21:44:11.158255 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:11.532843 kubelet[1579]: I0714 21:44:11.532805 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:11.605772 kubelet[1579]: E0714 21:44:11.605707 1579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:44:11.711121 kubelet[1579]: I0714 21:44:11.711076 1579 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:44:11.711121 kubelet[1579]: E0714 21:44:11.711122 1579 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:44:11.755201 kubelet[1579]: E0714 21:44:11.755160 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:11.855755 kubelet[1579]: E0714 21:44:11.855648 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:11.956502 kubelet[1579]: E0714 21:44:11.956444 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:12.057090 kubelet[1579]: E0714 21:44:12.057046 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 
14 21:44:12.164012 kubelet[1579]: E0714 21:44:12.163671 1579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:12.164012 kubelet[1579]: E0714 21:44:12.163870 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:13.106710 kubelet[1579]: I0714 21:44:13.106675 1579 apiserver.go:52] "Watching apiserver" Jul 14 21:44:13.123761 kubelet[1579]: I0714 21:44:13.123674 1579 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:44:13.777510 systemd[1]: Reloading. Jul 14 21:44:13.818908 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-07-14T21:44:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:44:13.818942 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-07-14T21:44:13Z" level=info msg="torcx already run" Jul 14 21:44:13.887511 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:44:13.887530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:44:13.904276 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:44:13.985108 systemd[1]: Stopping kubelet.service... Jul 14 21:44:14.012005 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:44:14.012217 systemd[1]: Stopped kubelet.service. Jul 14 21:44:14.012270 systemd[1]: kubelet.service: Consumed 1.679s CPU time. Jul 14 21:44:14.014036 systemd[1]: Starting kubelet.service... Jul 14 21:44:14.112319 systemd[1]: Started kubelet.service. Jul 14 21:44:14.149442 kubelet[1919]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:44:14.149442 kubelet[1919]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:44:14.149442 kubelet[1919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:44:14.149827 kubelet[1919]: I0714 21:44:14.149478 1919 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:44:14.156478 kubelet[1919]: I0714 21:44:14.156433 1919 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:44:14.156478 kubelet[1919]: I0714 21:44:14.156476 1919 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:44:14.156734 kubelet[1919]: I0714 21:44:14.156703 1919 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:44:14.159100 kubelet[1919]: I0714 21:44:14.159076 1919 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 21:44:14.161855 kubelet[1919]: I0714 21:44:14.161818 1919 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:44:14.165729 kubelet[1919]: E0714 21:44:14.165696 1919 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:44:14.165729 kubelet[1919]: I0714 21:44:14.165725 1919 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:44:14.168669 kubelet[1919]: I0714 21:44:14.168646 1919 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 21:44:14.168797 kubelet[1919]: I0714 21:44:14.168784 1919 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:44:14.168912 kubelet[1919]: I0714 21:44:14.168888 1919 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:44:14.169067 kubelet[1919]: I0714 21:44:14.168915 1919 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:44:14.169176 kubelet[1919]: I0714 21:44:14.169075 1919 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:44:14.169176 kubelet[1919]: I0714 21:44:14.169083 1919 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:44:14.169176 kubelet[1919]: I0714 21:44:14.169113 1919 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:14.169279 kubelet[1919]: I0714 21:44:14.169211 1919 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:44:14.169279 kubelet[1919]: I0714 21:44:14.169224 1919 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:44:14.169279 kubelet[1919]: I0714 21:44:14.169242 1919 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:44:14.169449 kubelet[1919]: I0714 21:44:14.169363 1919 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.170885 1919 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.171317 1919 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.171736 1919 server.go:1274] "Started kubelet" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.173333 1919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.173485 1919 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.173563 1919 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.174633 1919 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.175408 1919 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.175636 1919 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.176580 1919 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.176733 1919 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:44:14.179240 kubelet[1919]: I0714 21:44:14.176792 1919 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:44:14.179240 kubelet[1919]: E0714 21:44:14.177269 1919 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:44:14.179645 kubelet[1919]: I0714 21:44:14.179284 1919 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:44:14.192277 kubelet[1919]: I0714 21:44:14.192035 1919 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:44:14.192277 kubelet[1919]: I0714 21:44:14.192060 1919 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:44:14.202978 kubelet[1919]: I0714 21:44:14.201815 1919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:44:14.202978 kubelet[1919]: I0714 21:44:14.202944 1919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:44:14.202978 kubelet[1919]: I0714 21:44:14.202964 1919 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:44:14.202978 kubelet[1919]: I0714 21:44:14.202982 1919 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:44:14.203182 kubelet[1919]: E0714 21:44:14.203033 1919 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:44:14.234712 kubelet[1919]: I0714 21:44:14.234665 1919 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:44:14.234712 kubelet[1919]: I0714 21:44:14.234698 1919 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:44:14.234712 kubelet[1919]: I0714 21:44:14.234721 1919 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:44:14.234900 kubelet[1919]: I0714 21:44:14.234871 1919 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:44:14.234900 kubelet[1919]: I0714 21:44:14.234883 1919 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:44:14.234900 kubelet[1919]: I0714 21:44:14.234901 1919 policy_none.go:49] "None policy: Start" Jul 14 21:44:14.235551 kubelet[1919]: I0714 21:44:14.235521 1919 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:44:14.235645 kubelet[1919]: I0714 21:44:14.235558 1919 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:44:14.235744 kubelet[1919]: I0714 21:44:14.235727 1919 state_mem.go:75] "Updated machine memory state" Jul 14 21:44:14.239534 kubelet[1919]: I0714 21:44:14.239513 1919 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:44:14.239904 kubelet[1919]: I0714 21:44:14.239885 1919 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:44:14.240016 kubelet[1919]: I0714 
21:44:14.239984 1919 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:44:14.240286 kubelet[1919]: I0714 21:44:14.240268 1919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:44:14.345654 kubelet[1919]: I0714 21:44:14.345610 1919 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:44:14.351848 kubelet[1919]: I0714 21:44:14.351807 1919 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 21:44:14.351977 kubelet[1919]: I0714 21:44:14.351895 1919 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:44:14.478353 kubelet[1919]: I0714 21:44:14.478304 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:14.478540 kubelet[1919]: I0714 21:44:14.478389 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:14.478540 kubelet[1919]: I0714 21:44:14.478410 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:14.478540 kubelet[1919]: I0714 21:44:14.478430 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:44:14.478540 kubelet[1919]: I0714 21:44:14.478447 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:14.478540 kubelet[1919]: I0714 21:44:14.478474 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:14.478702 kubelet[1919]: I0714 21:44:14.478492 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:14.478702 kubelet[1919]: I0714 21:44:14.478508 1919 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:44:14.478702 kubelet[1919]: I0714 21:44:14.478524 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc55bae11de9f21dc28c65c7df3ecad1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc55bae11de9f21dc28c65c7df3ecad1\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:44:14.613352 kubelet[1919]: E0714 21:44:14.613307 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:14.613506 kubelet[1919]: E0714 21:44:14.613398 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:14.613538 kubelet[1919]: E0714 21:44:14.613516 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:15.170363 kubelet[1919]: I0714 21:44:15.170326 1919 apiserver.go:52] "Watching apiserver" Jul 14 21:44:15.176896 kubelet[1919]: I0714 21:44:15.176873 1919 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:44:15.221901 kubelet[1919]: E0714 21:44:15.221855 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:15.222093 kubelet[1919]: E0714 21:44:15.222066 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:15.222262 kubelet[1919]: E0714 21:44:15.222240 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:15.247530 kubelet[1919]: I0714 21:44:15.247456 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.247438328 podStartE2EDuration="1.247438328s" podCreationTimestamp="2025-07-14 21:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:15.239937281 +0000 UTC m=+1.122997031" watchObservedRunningTime="2025-07-14 21:44:15.247438328 +0000 UTC m=+1.130498078" Jul 14 21:44:15.255405 kubelet[1919]: I0714 21:44:15.255354 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.255338335 podStartE2EDuration="1.255338335s" podCreationTimestamp="2025-07-14 21:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:15.247960328 +0000 UTC m=+1.131020078" watchObservedRunningTime="2025-07-14 21:44:15.255338335 +0000 UTC m=+1.138398045" Jul 14 21:44:15.255562 
kubelet[1919]: I0714 21:44:15.255442 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.255437655 podStartE2EDuration="1.255437655s" podCreationTimestamp="2025-07-14 21:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:15.254795215 +0000 UTC m=+1.137854965" watchObservedRunningTime="2025-07-14 21:44:15.255437655 +0000 UTC m=+1.138497405" Jul 14 21:44:15.851426 sudo[1321]: pam_unix(sudo:session): session closed for user root Jul 14 21:44:15.854263 sshd[1317]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:15.857503 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:35282.service: Deactivated successfully. Jul 14 21:44:15.858431 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:44:15.858589 systemd[1]: session-5.scope: Consumed 6.464s CPU time. Jul 14 21:44:15.859083 systemd-logind[1209]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:44:15.860115 systemd-logind[1209]: Removed session 5. Jul 14 21:44:16.223682 kubelet[1919]: E0714 21:44:16.223651 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:20.176798 kubelet[1919]: I0714 21:44:20.176671 1919 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:44:20.177149 env[1220]: time="2025-07-14T21:44:20.177007407Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:44:20.177317 kubelet[1919]: I0714 21:44:20.177205 1919 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:44:21.019742 kubelet[1919]: E0714 21:44:21.019709 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:21.071522 kubelet[1919]: W0714 21:44:21.071492 1919 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 21:44:21.071741 kubelet[1919]: E0714 21:44:21.071716 1919 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 21:44:21.076218 systemd[1]: Created slice kubepods-besteffort-podf355c336_247b_4ebb_8422_84bddb4e6ba4.slice. Jul 14 21:44:21.098387 systemd[1]: Created slice kubepods-burstable-podc48c8e3c_5127_44b8_ade6_0e7bc1b1b7a3.slice. 
Jul 14 21:44:21.129642 kubelet[1919]: I0714 21:44:21.129577 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgfk\" (UniqueName: \"kubernetes.io/projected/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-kube-api-access-7fgfk\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.129642 kubelet[1919]: I0714 21:44:21.129631 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-cni\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.129838 kubelet[1919]: I0714 21:44:21.129678 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-flannel-cfg\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.129838 kubelet[1919]: I0714 21:44:21.129719 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f355c336-247b-4ebb-8422-84bddb4e6ba4-xtables-lock\") pod \"kube-proxy-snmjn\" (UID: \"f355c336-247b-4ebb-8422-84bddb4e6ba4\") " pod="kube-system/kube-proxy-snmjn" Jul 14 21:44:21.129838 kubelet[1919]: I0714 21:44:21.129749 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzkz\" (UniqueName: \"kubernetes.io/projected/f355c336-247b-4ebb-8422-84bddb4e6ba4-kube-api-access-rhzkz\") pod \"kube-proxy-snmjn\" (UID: \"f355c336-247b-4ebb-8422-84bddb4e6ba4\") " pod="kube-system/kube-proxy-snmjn" Jul 14 21:44:21.129838 kubelet[1919]: I0714 21:44:21.129791 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f355c336-247b-4ebb-8422-84bddb4e6ba4-kube-proxy\") pod \"kube-proxy-snmjn\" (UID: \"f355c336-247b-4ebb-8422-84bddb4e6ba4\") " pod="kube-system/kube-proxy-snmjn" Jul 14 21:44:21.129838 kubelet[1919]: I0714 21:44:21.129808 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-xtables-lock\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.129952 kubelet[1919]: I0714 21:44:21.129825 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f355c336-247b-4ebb-8422-84bddb4e6ba4-lib-modules\") pod \"kube-proxy-snmjn\" (UID: \"f355c336-247b-4ebb-8422-84bddb4e6ba4\") " pod="kube-system/kube-proxy-snmjn" Jul 14 21:44:21.129952 kubelet[1919]: I0714 21:44:21.129864 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-run\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.129952 kubelet[1919]: I0714 21:44:21.129882 1919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3-cni-plugin\") pod \"kube-flannel-ds-jg5ph\" (UID: \"c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3\") " pod="kube-flannel/kube-flannel-ds-jg5ph" Jul 14 21:44:21.232071 kubelet[1919]: E0714 21:44:21.232020 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:21.239848 kubelet[1919]: I0714 21:44:21.239805 1919 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 21:44:21.403626 kubelet[1919]: E0714 21:44:21.403506 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:21.404088 env[1220]: time="2025-07-14T21:44:21.404045505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jg5ph,Uid:c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3,Namespace:kube-flannel,Attempt:0,}" Jul 14 21:44:21.421553 env[1220]: time="2025-07-14T21:44:21.421481356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:21.421553 env[1220]: time="2025-07-14T21:44:21.421518756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:21.421553 env[1220]: time="2025-07-14T21:44:21.421528716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:21.421754 env[1220]: time="2025-07-14T21:44:21.421676196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051 pid=1993 runtime=io.containerd.runc.v2 Jul 14 21:44:21.435264 systemd[1]: Started cri-containerd-bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051.scope. 
Jul 14 21:44:21.476029 env[1220]: time="2025-07-14T21:44:21.475984189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jg5ph,Uid:c48c8e3c-5127-44b8-ade6-0e7bc1b1b7a3,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\"" Jul 14 21:44:21.476937 kubelet[1919]: E0714 21:44:21.476906 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:21.479266 env[1220]: time="2025-07-14T21:44:21.479230031Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 14 21:44:21.988518 kubelet[1919]: E0714 21:44:21.988382 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:21.988955 env[1220]: time="2025-07-14T21:44:21.988913620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snmjn,Uid:f355c336-247b-4ebb-8422-84bddb4e6ba4,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:22.011940 env[1220]: time="2025-07-14T21:44:22.011854394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:22.011940 env[1220]: time="2025-07-14T21:44:22.011896914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:22.011940 env[1220]: time="2025-07-14T21:44:22.011908514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:22.012142 env[1220]: time="2025-07-14T21:44:22.012107834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21ba2007c0c0622f29bc1ec9c89401b61dfad91e8b72612f005ca64f4cbf06fd pid=2036 runtime=io.containerd.runc.v2 Jul 14 21:44:22.023452 systemd[1]: Started cri-containerd-21ba2007c0c0622f29bc1ec9c89401b61dfad91e8b72612f005ca64f4cbf06fd.scope. 
Jul 14 21:44:22.060083 env[1220]: time="2025-07-14T21:44:22.059920621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snmjn,Uid:f355c336-247b-4ebb-8422-84bddb4e6ba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ba2007c0c0622f29bc1ec9c89401b61dfad91e8b72612f005ca64f4cbf06fd\"" Jul 14 21:44:22.060767 kubelet[1919]: E0714 21:44:22.060727 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:22.062676 env[1220]: time="2025-07-14T21:44:22.062642743Z" level=info msg="CreateContainer within sandbox \"21ba2007c0c0622f29bc1ec9c89401b61dfad91e8b72612f005ca64f4cbf06fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:44:22.074533 env[1220]: time="2025-07-14T21:44:22.074477630Z" level=info msg="CreateContainer within sandbox \"21ba2007c0c0622f29bc1ec9c89401b61dfad91e8b72612f005ca64f4cbf06fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a963e1809925026b80f1f222f371683a8159fd963cf804bf1b216f8d2aa39f1\"" Jul 14 21:44:22.075060 env[1220]: time="2025-07-14T21:44:22.075027710Z" level=info msg="StartContainer for \"3a963e1809925026b80f1f222f371683a8159fd963cf804bf1b216f8d2aa39f1\"" Jul 14 21:44:22.090167 systemd[1]: Started cri-containerd-3a963e1809925026b80f1f222f371683a8159fd963cf804bf1b216f8d2aa39f1.scope. Jul 14 21:44:22.127863 env[1220]: time="2025-07-14T21:44:22.127819980Z" level=info msg="StartContainer for \"3a963e1809925026b80f1f222f371683a8159fd963cf804bf1b216f8d2aa39f1\" returns successfully" Jul 14 21:44:22.236367 kubelet[1919]: E0714 21:44:22.236281 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:22.390101 kubelet[1919]: E0714 21:44:22.389952 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:22.405951 kubelet[1919]: I0714 21:44:22.405782 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snmjn" podStartSLOduration=1.4057557379999999 podStartE2EDuration="1.405755738s" podCreationTimestamp="2025-07-14 21:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:22.25066657 +0000 UTC m=+8.133726400" watchObservedRunningTime="2025-07-14 21:44:22.405755738 +0000 UTC m=+8.288815448" Jul 14 21:44:22.783003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136526577.mount: Deactivated successfully. 
Jul 14 21:44:22.820440 env[1220]: time="2025-07-14T21:44:22.820394254Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.822633 env[1220]: time="2025-07-14T21:44:22.822590655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.824671 env[1220]: time="2025-07-14T21:44:22.824636656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.826748 env[1220]: time="2025-07-14T21:44:22.826706218Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:22.827252 env[1220]: time="2025-07-14T21:44:22.827222698Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jul 14 21:44:22.831215 env[1220]: time="2025-07-14T21:44:22.831167580Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 14 21:44:22.840226 env[1220]: time="2025-07-14T21:44:22.840181945Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9\"" Jul 14 21:44:22.840733 env[1220]: time="2025-07-14T21:44:22.840705586Z" level=info msg="StartContainer for \"4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9\"" Jul 14 21:44:22.845105 kubelet[1919]: E0714 21:44:22.845073 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:22.866744 systemd[1]: Started cri-containerd-4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9.scope. Jul 14 21:44:22.901145 env[1220]: time="2025-07-14T21:44:22.901096140Z" level=info msg="StartContainer for \"4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9\" returns successfully" Jul 14 21:44:22.906423 systemd[1]: cri-containerd-4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9.scope: Deactivated successfully. 
Jul 14 21:44:22.949577 env[1220]: time="2025-07-14T21:44:22.949523448Z" level=info msg="shim disconnected" id=4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9 Jul 14 21:44:22.949577 env[1220]: time="2025-07-14T21:44:22.949573208Z" level=warning msg="cleaning up after shim disconnected" id=4db9aba3692cab31d8d9c9e9092613979470c5b21e6cc4bf2d1757f88e5480c9 namespace=k8s.io Jul 14 21:44:22.949577 env[1220]: time="2025-07-14T21:44:22.949584648Z" level=info msg="cleaning up dead shim" Jul 14 21:44:22.956269 env[1220]: time="2025-07-14T21:44:22.956221211Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2279 runtime=io.containerd.runc.v2\n" Jul 14 21:44:23.245245 kubelet[1919]: E0714 21:44:23.245216 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:23.245776 kubelet[1919]: E0714 21:44:23.245751 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:23.246040 kubelet[1919]: E0714 21:44:23.246018 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:23.246790 env[1220]: time="2025-07-14T21:44:23.246757288Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 14 21:44:24.247735 kubelet[1919]: E0714 21:44:24.246725 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:24.451057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68170744.mount: Deactivated successfully. 
Jul 14 21:44:25.188754 env[1220]: time="2025-07-14T21:44:25.188711639Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.190131 env[1220]: time="2025-07-14T21:44:25.190089240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.191729 env[1220]: time="2025-07-14T21:44:25.191702281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.193514 env[1220]: time="2025-07-14T21:44:25.193478002Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:44:25.194330 env[1220]: time="2025-07-14T21:44:25.194298722Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jul 14 21:44:25.198102 env[1220]: time="2025-07-14T21:44:25.198057964Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 21:44:25.325016 env[1220]: time="2025-07-14T21:44:25.324953223Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385\"" Jul 14 21:44:25.325796 env[1220]: time="2025-07-14T21:44:25.325758824Z" level=info msg="StartContainer for \"d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385\"" Jul 14 21:44:25.341533 systemd[1]: Started cri-containerd-d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385.scope. Jul 14 21:44:25.379268 env[1220]: time="2025-07-14T21:44:25.379216609Z" level=info msg="StartContainer for \"d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385\" returns successfully" Jul 14 21:44:25.381047 systemd[1]: cri-containerd-d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385.scope: Deactivated successfully. Jul 14 21:44:25.392208 kubelet[1919]: I0714 21:44:25.392009 1919 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 21:44:25.398035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385-rootfs.mount: Deactivated successfully. Jul 14 21:44:25.444045 systemd[1]: Created slice kubepods-burstable-podcbde8377_4089_4d16_9303_fe5e7d2cf264.slice. Jul 14 21:44:25.450128 systemd[1]: Created slice kubepods-burstable-pod52edfe48_6ed4_4315_a021_9edeaf0de226.slice. 
Jul 14 21:44:25.458284 kubelet[1919]: I0714 21:44:25.456448 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbde8377-4089-4d16-9303-fe5e7d2cf264-config-volume\") pod \"coredns-7c65d6cfc9-wsf54\" (UID: \"cbde8377-4089-4d16-9303-fe5e7d2cf264\") " pod="kube-system/coredns-7c65d6cfc9-wsf54" Jul 14 21:44:25.458509 kubelet[1919]: I0714 21:44:25.458491 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ssht\" (UniqueName: \"kubernetes.io/projected/52edfe48-6ed4-4315-a021-9edeaf0de226-kube-api-access-2ssht\") pod \"coredns-7c65d6cfc9-srhpk\" (UID: \"52edfe48-6ed4-4315-a021-9edeaf0de226\") " pod="kube-system/coredns-7c65d6cfc9-srhpk" Jul 14 21:44:25.458609 kubelet[1919]: I0714 21:44:25.458588 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bdq\" (UniqueName: \"kubernetes.io/projected/cbde8377-4089-4d16-9303-fe5e7d2cf264-kube-api-access-79bdq\") pod \"coredns-7c65d6cfc9-wsf54\" (UID: \"cbde8377-4089-4d16-9303-fe5e7d2cf264\") " pod="kube-system/coredns-7c65d6cfc9-wsf54" Jul 14 21:44:25.458707 kubelet[1919]: I0714 21:44:25.458683 1919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52edfe48-6ed4-4315-a021-9edeaf0de226-config-volume\") pod \"coredns-7c65d6cfc9-srhpk\" (UID: \"52edfe48-6ed4-4315-a021-9edeaf0de226\") " pod="kube-system/coredns-7c65d6cfc9-srhpk" Jul 14 21:44:25.621529 env[1220]: time="2025-07-14T21:44:25.621478682Z" level=info msg="shim disconnected" id=d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385 Jul 14 21:44:25.621529 env[1220]: time="2025-07-14T21:44:25.621525442Z" level=warning msg="cleaning up after shim disconnected" id=d253f47997b0bbb2d10750ada37bf6b77b8a180607f8221f5f151dbd733f4385 namespace=k8s.io Jul 14 21:44:25.621750 env[1220]: time="2025-07-14T21:44:25.621540402Z" level=info msg="cleaning up dead shim" Jul 14 21:44:25.628426 env[1220]: time="2025-07-14T21:44:25.628374645Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2337 runtime=io.containerd.runc.v2\n" Jul 14 21:44:25.747860 kubelet[1919]: E0714 21:44:25.747686 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:25.748344 env[1220]: time="2025-07-14T21:44:25.748305382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wsf54,Uid:cbde8377-4089-4d16-9303-fe5e7d2cf264,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:25.752633 kubelet[1919]: E0714 21:44:25.752607 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:25.753124 env[1220]: time="2025-07-14T21:44:25.753091504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-srhpk,Uid:52edfe48-6ed4-4315-a021-9edeaf0de226,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:25.804019 env[1220]: time="2025-07-14T21:44:25.803945088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wsf54,Uid:cbde8377-4089-4d16-9303-fe5e7d2cf264,Namespace:kube-system,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 14 21:44:25.804731 kubelet[1919]: E0714 21:44:25.804307 1919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 14 21:44:25.804731 kubelet[1919]: E0714 21:44:25.804375 1919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-wsf54" Jul 14 21:44:25.804731 kubelet[1919]: E0714 21:44:25.804394 1919 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-wsf54" Jul 14 21:44:25.804731 kubelet[1919]: E0714 21:44:25.804446 1919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wsf54_kube-system(cbde8377-4089-4d16-9303-fe5e7d2cf264)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wsf54_kube-system(cbde8377-4089-4d16-9303-fe5e7d2cf264)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-wsf54" podUID="cbde8377-4089-4d16-9303-fe5e7d2cf264" Jul 14 21:44:25.805710 env[1220]: time="2025-07-14T21:44:25.805660209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-srhpk,Uid:52edfe48-6ed4-4315-a021-9edeaf0de226,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4d3fa5df82d3ead152e7506ba37312d757263cc1beac6689abab1868bde6f49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 14 21:44:25.806086 kubelet[1919]: E0714 21:44:25.805956 1919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d3fa5df82d3ead152e7506ba37312d757263cc1beac6689abab1868bde6f49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 14 21:44:25.806086 kubelet[1919]: E0714 21:44:25.805996 1919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d3fa5df82d3ead152e7506ba37312d757263cc1beac6689abab1868bde6f49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-srhpk" Jul 14 
21:44:25.806086 kubelet[1919]: E0714 21:44:25.806014 1919 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d3fa5df82d3ead152e7506ba37312d757263cc1beac6689abab1868bde6f49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-srhpk" Jul 14 21:44:25.806086 kubelet[1919]: E0714 21:44:25.806042 1919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-srhpk_kube-system(52edfe48-6ed4-4315-a021-9edeaf0de226)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-srhpk_kube-system(52edfe48-6ed4-4315-a021-9edeaf0de226)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d3fa5df82d3ead152e7506ba37312d757263cc1beac6689abab1868bde6f49\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-srhpk" podUID="52edfe48-6ed4-4315-a021-9edeaf0de226" Jul 14 21:44:26.252429 kubelet[1919]: E0714 21:44:26.252366 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:26.257177 env[1220]: time="2025-07-14T21:44:26.256620093Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 14 21:44:26.271506 env[1220]: time="2025-07-14T21:44:26.271431419Z" level=info msg="CreateContainer within sandbox \"bf294004cfbc447e3b4ce759cd88609d85a0017263604e1cd16d336c5ed61051\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f62aec2790765b642251df5df65a485c1d152778b1ed34739c9164d943f017dd\"" Jul 14 21:44:26.272666 env[1220]: time="2025-07-14T21:44:26.272627780Z" level=info msg="StartContainer for \"f62aec2790765b642251df5df65a485c1d152778b1ed34739c9164d943f017dd\"" Jul 14 21:44:26.287729 systemd[1]: Started cri-containerd-f62aec2790765b642251df5df65a485c1d152778b1ed34739c9164d943f017dd.scope. Jul 14 21:44:26.330294 env[1220]: time="2025-07-14T21:44:26.330238685Z" level=info msg="StartContainer for \"f62aec2790765b642251df5df65a485c1d152778b1ed34739c9164d943f017dd\" returns successfully" Jul 14 21:44:26.354862 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21dbd5c1fd01aab9a1bbe71dd671a48bb012c79dba1467525f23866ffa4b2294-shm.mount: Deactivated successfully. Jul 14 21:44:27.254793 kubelet[1919]: E0714 21:44:27.254764 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:27.403690 update_engine[1212]: I0714 21:44:27.403638 1212 update_attempter.cc:509] Updating boot flags... 
Jul 14 21:44:27.425426 systemd-networkd[1050]: flannel.1: Link UP Jul 14 21:44:27.425444 systemd-networkd[1050]: flannel.1: Gained carrier Jul 14 21:44:28.256484 kubelet[1919]: E0714 21:44:28.256443 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:29.178723 systemd-networkd[1050]: flannel.1: Gained IPv6LL Jul 14 21:44:37.204513 kubelet[1919]: E0714 21:44:37.204470 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:37.205807 env[1220]: time="2025-07-14T21:44:37.205423521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-srhpk,Uid:52edfe48-6ed4-4315-a021-9edeaf0de226,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:37.234398 systemd-networkd[1050]: cni0: Link UP Jul 14 21:44:37.234411 systemd-networkd[1050]: cni0: Gained carrier Jul 14 21:44:37.238171 systemd-networkd[1050]: cni0: Lost carrier Jul 14 21:44:37.244572 kernel: cni0: port 1(veth3dde7539) entered blocking state Jul 14 21:44:37.244692 kernel: cni0: port 1(veth3dde7539) entered disabled state Jul 14 21:44:37.244719 kernel: device veth3dde7539 entered promiscuous mode Jul 14 21:44:37.244741 kernel: cni0: port 1(veth3dde7539) entered blocking state Jul 14 21:44:37.240968 systemd-networkd[1050]: veth3dde7539: Link UP Jul 14 21:44:37.249641 kernel: cni0: port 1(veth3dde7539) entered forwarding state Jul 14 21:44:37.251623 kernel: cni0: port 1(veth3dde7539) entered disabled state Jul 14 21:44:37.268672 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3dde7539: link becomes ready Jul 14 21:44:37.268779 kernel: cni0: port 1(veth3dde7539) entered blocking state Jul 14 21:44:37.268806 kernel: cni0: port 1(veth3dde7539) entered forwarding state Jul 14 21:44:37.268916 systemd-networkd[1050]: veth3dde7539: Gained carrier Jul 14 21:44:37.269159 systemd-networkd[1050]: cni0: Gained carrier Jul 14 21:44:37.271255 env[1220]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Jul 14 21:44:37.271255 env[1220]: delegateAdd: netconf sent to delegate plugin: Jul 14 21:44:37.284074 env[1220]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-14T21:44:37.284004497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:37.284217 env[1220]: time="2025-07-14T21:44:37.284086578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:37.284217 env[1220]: time="2025-07-14T21:44:37.284113578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:37.284382 env[1220]: time="2025-07-14T21:44:37.284292098Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098 pid=2626 runtime=io.containerd.runc.v2 Jul 14 21:44:37.296198 systemd[1]: run-containerd-runc-k8s.io-b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098-runc.LV3tG9.mount: Deactivated successfully. Jul 14 21:44:37.298758 systemd[1]: Started cri-containerd-b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098.scope. Jul 14 21:44:37.319533 systemd-resolved[1163]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:44:37.336205 env[1220]: time="2025-07-14T21:44:37.335634909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-srhpk,Uid:52edfe48-6ed4-4315-a021-9edeaf0de226,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098\"" Jul 14 21:44:37.336633 kubelet[1919]: E0714 21:44:37.336608 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:37.339830 env[1220]: time="2025-07-14T21:44:37.339783510Z" level=info msg="CreateContainer within sandbox \"b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:44:37.350677 env[1220]: time="2025-07-14T21:44:37.350615712Z" level=info msg="CreateContainer within sandbox \"b3d4162c24b800d19c3bb81a16974a63fa24d97b504ee7e3bc56f99171a09098\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a9216b6661c71f35880a56cbc5745e5c709e63135f29f3ec570ffd8becfddaa\"" Jul 14 21:44:37.351276 env[1220]: time="2025-07-14T21:44:37.351248312Z" level=info msg="StartContainer for \"7a9216b6661c71f35880a56cbc5745e5c709e63135f29f3ec570ffd8becfddaa\"" Jul 14 21:44:37.365706 systemd[1]: Started cri-containerd-7a9216b6661c71f35880a56cbc5745e5c709e63135f29f3ec570ffd8becfddaa.scope. 
Jul 14 21:44:37.418997 env[1220]: time="2025-07-14T21:44:37.418947327Z" level=info msg="StartContainer for \"7a9216b6661c71f35880a56cbc5745e5c709e63135f29f3ec570ffd8becfddaa\" returns successfully" Jul 14 21:44:38.274003 kubelet[1919]: E0714 21:44:38.273956 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:38.285031 kubelet[1919]: I0714 21:44:38.284927 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jg5ph" podStartSLOduration=13.568293939 podStartE2EDuration="17.28490991s" podCreationTimestamp="2025-07-14 21:44:21 +0000 UTC" firstStartedPulling="2025-07-14 21:44:21.478729031 +0000 UTC m=+7.361788781" lastFinishedPulling="2025-07-14 21:44:25.195345002 +0000 UTC m=+11.078404752" observedRunningTime="2025-07-14 21:44:27.274705373 +0000 UTC m=+13.157765123" watchObservedRunningTime="2025-07-14 21:44:38.28490991 +0000 UTC m=+24.167969660" Jul 14 21:44:38.295102 kubelet[1919]: I0714 21:44:38.295029 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-srhpk" podStartSLOduration=17.295013312000002 podStartE2EDuration="17.295013312s" podCreationTimestamp="2025-07-14 21:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:38.28529343 +0000 UTC m=+24.168353180" watchObservedRunningTime="2025-07-14 21:44:38.295013312 +0000 UTC m=+24.178073062" Jul 14 21:44:38.522721 systemd-networkd[1050]: veth3dde7539: Gained IPv6LL Jul 14 21:44:38.842755 systemd-networkd[1050]: cni0: Gained IPv6LL Jul 14 21:44:39.275855 kubelet[1919]: E0714 21:44:39.275809 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:39.406245 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:35320.service. Jul 14 21:44:39.444086 sshd[2725]: Accepted publickey for core from 10.0.0.1 port 35320 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:44:39.445523 sshd[2725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:44:39.452438 systemd-logind[1209]: New session 6 of user core. Jul 14 21:44:39.452928 systemd[1]: Started session-6.scope. Jul 14 21:44:39.572762 sshd[2725]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:39.575392 systemd-logind[1209]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:44:39.575554 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:35320.service: Deactivated successfully. Jul 14 21:44:39.576353 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:44:39.577009 systemd-logind[1209]: Removed session 6. 
Jul 14 21:44:40.204090 kubelet[1919]: E0714 21:44:40.203583 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:40.204458 env[1220]: time="2025-07-14T21:44:40.204390961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wsf54,Uid:cbde8377-4089-4d16-9303-fe5e7d2cf264,Namespace:kube-system,Attempt:0,}" Jul 14 21:44:40.222239 systemd-networkd[1050]: veth0249a95d: Link UP Jul 14 21:44:40.225141 kernel: cni0: port 2(veth0249a95d) entered blocking state Jul 14 21:44:40.225237 kernel: cni0: port 2(veth0249a95d) entered disabled state Jul 14 21:44:40.225263 kernel: device veth0249a95d entered promiscuous mode Jul 14 21:44:40.231887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:44:40.231991 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0249a95d: link becomes ready Jul 14 21:44:40.232014 kernel: cni0: port 2(veth0249a95d) entered blocking state Jul 14 21:44:40.233000 kernel: cni0: port 2(veth0249a95d) entered forwarding state Jul 14 21:44:40.233099 systemd-networkd[1050]: veth0249a95d: Gained carrier Jul 14 21:44:40.235113 env[1220]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Jul 14 21:44:40.235113 env[1220]: delegateAdd: netconf sent to delegate plugin: Jul 14 21:44:40.247850 env[1220]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-14T21:44:40.247779289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:44:40.248000 env[1220]: time="2025-07-14T21:44:40.247867529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:44:40.248000 env[1220]: time="2025-07-14T21:44:40.247898369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:44:40.248088 env[1220]: time="2025-07-14T21:44:40.248057489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890 pid=2791 runtime=io.containerd.runc.v2 Jul 14 21:44:40.264390 systemd[1]: run-containerd-runc-k8s.io-160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890-runc.P7bALu.mount: Deactivated successfully. Jul 14 21:44:40.266004 systemd[1]: Started cri-containerd-160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890.scope. 
Jul 14 21:44:40.277927 kubelet[1919]: E0714 21:44:40.277898 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:40.294418 systemd-resolved[1163]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:44:40.311939 env[1220]: time="2025-07-14T21:44:40.311880261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wsf54,Uid:cbde8377-4089-4d16-9303-fe5e7d2cf264,Namespace:kube-system,Attempt:0,} returns sandbox id \"160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890\"" Jul 14 21:44:40.312637 kubelet[1919]: E0714 21:44:40.312610 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:40.316208 env[1220]: time="2025-07-14T21:44:40.316166461Z" level=info msg="CreateContainer within sandbox \"160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:44:40.326363 env[1220]: time="2025-07-14T21:44:40.326305583Z" level=info msg="CreateContainer within sandbox \"160344db14bd94e9c5369862aa0dd64d5018c71c9fb645beacd7f63c7cc9e890\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0ed4fb064ea6381a48dd1d88e17d11c9d3b77f447b6f4cd9af1d8eb3b43dfe3\"" Jul 14 21:44:40.327026 env[1220]: time="2025-07-14T21:44:40.326925903Z" level=info msg="StartContainer for \"a0ed4fb064ea6381a48dd1d88e17d11c9d3b77f447b6f4cd9af1d8eb3b43dfe3\"" Jul 14 21:44:40.342131 systemd[1]: Started cri-containerd-a0ed4fb064ea6381a48dd1d88e17d11c9d3b77f447b6f4cd9af1d8eb3b43dfe3.scope. Jul 14 21:44:40.388374 env[1220]: time="2025-07-14T21:44:40.388314194Z" level=info msg="StartContainer for \"a0ed4fb064ea6381a48dd1d88e17d11c9d3b77f447b6f4cd9af1d8eb3b43dfe3\" returns successfully" Jul 14 21:44:41.280533 kubelet[1919]: E0714 21:44:41.280489 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:41.300238 kubelet[1919]: I0714 21:44:41.300180 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wsf54" podStartSLOduration=20.300159633 podStartE2EDuration="20.300159633s" podCreationTimestamp="2025-07-14 21:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:44:41.291018632 +0000 UTC m=+27.174078342" watchObservedRunningTime="2025-07-14 21:44:41.300159633 +0000 UTC m=+27.183219383" Jul 14 21:44:42.234759 systemd-networkd[1050]: veth0249a95d: Gained IPv6LL Jul 14 21:44:42.282298 kubelet[1919]: E0714 21:44:42.282268 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:43.283613 kubelet[1919]: E0714 21:44:43.283564 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:44:44.577341 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:52170.service. 
Jul 14 21:44:44.614800 sshd[2891]: Accepted publickey for core from 10.0.0.1 port 52170 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:44:44.616101 sshd[2891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:44:44.620264 systemd-logind[1209]: New session 7 of user core. Jul 14 21:44:44.620621 systemd[1]: Started session-7.scope. Jul 14 21:44:44.728887 sshd[2891]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:44.731320 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:52170.service: Deactivated successfully. Jul 14 21:44:44.732134 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:44:44.732671 systemd-logind[1209]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:44:44.733312 systemd-logind[1209]: Removed session 7. Jul 14 21:44:49.735230 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:52186.service. Jul 14 21:44:49.776606 sshd[2926]: Accepted publickey for core from 10.0.0.1 port 52186 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:44:49.778226 sshd[2926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:44:49.783911 systemd-logind[1209]: New session 8 of user core. Jul 14 21:44:49.785110 systemd[1]: Started session-8.scope. Jul 14 21:44:49.913205 sshd[2926]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:49.916524 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:52186.service: Deactivated successfully. Jul 14 21:44:49.917245 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:44:49.917941 systemd-logind[1209]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:44:49.919084 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:52202.service. Jul 14 21:44:49.920785 systemd-logind[1209]: Removed session 8. Jul 14 21:44:49.955477 sshd[2940]: Accepted publickey for core from 10.0.0.1 port 52202 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:44:49.956684 sshd[2940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:44:49.960049 systemd-logind[1209]: New session 9 of user core. Jul 14 21:44:49.960980 systemd[1]: Started session-9.scope. Jul 14 21:44:50.111903 sshd[2940]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:50.115041 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:52208.service. Jul 14 21:44:50.118084 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:44:50.119489 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:52202.service: Deactivated successfully. Jul 14 21:44:50.120480 systemd-logind[1209]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:44:50.124832 systemd-logind[1209]: Removed session 9. Jul 14 21:44:50.172029 sshd[2951]: Accepted publickey for core from 10.0.0.1 port 52208 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:44:50.173432 sshd[2951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:44:50.176885 systemd-logind[1209]: New session 10 of user core. Jul 14 21:44:50.177857 systemd[1]: Started session-10.scope. Jul 14 21:44:50.290778 sshd[2951]: pam_unix(sshd:session): session closed for user core Jul 14 21:44:50.293736 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:52208.service: Deactivated successfully. Jul 14 21:44:50.294434 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:44:50.295303 systemd-logind[1209]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:44:50.296164 systemd-logind[1209]: Removed session 10. 
Jul 14 21:44:55.297669 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:53808.service.
Jul 14 21:44:55.360727 sshd[2988]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:44:55.362404 sshd[2988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:44:55.366220 systemd-logind[1209]: New session 11 of user core.
Jul 14 21:44:55.366842 systemd[1]: Started session-11.scope.
Jul 14 21:44:55.510252 sshd[2988]: pam_unix(sshd:session): session closed for user core
Jul 14 21:44:55.512935 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:53808.service: Deactivated successfully.
Jul 14 21:44:55.513717 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 21:44:55.514286 systemd-logind[1209]: Session 11 logged out. Waiting for processes to exit.
Jul 14 21:44:55.515099 systemd-logind[1209]: Removed session 11.
Jul 14 21:45:00.515424 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:53814.service.
Jul 14 21:45:00.552440 sshd[3022]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:00.554161 sshd[3022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:00.558670 systemd-logind[1209]: New session 12 of user core.
Jul 14 21:45:00.558732 systemd[1]: Started session-12.scope.
Jul 14 21:45:00.669569 sshd[3022]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:00.672299 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:53814.service: Deactivated successfully.
Jul 14 21:45:00.673072 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 21:45:00.673589 systemd-logind[1209]: Session 12 logged out. Waiting for processes to exit.
Jul 14 21:45:00.674281 systemd-logind[1209]: Removed session 12.
Jul 14 21:45:05.674197 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:35210.service.
Jul 14 21:45:05.723441 sshd[3057]: Accepted publickey for core from 10.0.0.1 port 35210 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:05.724727 sshd[3057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:05.729172 systemd-logind[1209]: New session 13 of user core.
Jul 14 21:45:05.733756 systemd[1]: Started session-13.scope.
Jul 14 21:45:05.871938 sshd[3057]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:05.874456 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:35210.service: Deactivated successfully.
Jul 14 21:45:05.875168 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 21:45:05.875674 systemd-logind[1209]: Session 13 logged out. Waiting for processes to exit.
Jul 14 21:45:05.876378 systemd-logind[1209]: Removed session 13.
Jul 14 21:45:10.875754 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:35214.service.
Jul 14 21:45:10.915765 sshd[3093]: Accepted publickey for core from 10.0.0.1 port 35214 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:10.917441 sshd[3093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:10.921489 systemd-logind[1209]: New session 14 of user core.
Jul 14 21:45:10.922019 systemd[1]: Started session-14.scope.
Jul 14 21:45:11.046390 sshd[3093]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:11.050375 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:35226.service.
Jul 14 21:45:11.052101 systemd-logind[1209]: Session 14 logged out. Waiting for processes to exit.
Jul 14 21:45:11.052344 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:35214.service: Deactivated successfully.
Jul 14 21:45:11.053125 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 21:45:11.053839 systemd-logind[1209]: Removed session 14.
Jul 14 21:45:11.087206 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 35226 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:11.089270 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:11.092678 systemd-logind[1209]: New session 15 of user core.
Jul 14 21:45:11.093667 systemd[1]: Started session-15.scope.
Jul 14 21:45:11.246472 sshd[3105]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:11.250284 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:35232.service.
Jul 14 21:45:11.251776 systemd-logind[1209]: Session 15 logged out. Waiting for processes to exit.
Jul 14 21:45:11.251976 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:35226.service: Deactivated successfully.
Jul 14 21:45:11.252616 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 21:45:11.253339 systemd-logind[1209]: Removed session 15.
Jul 14 21:45:11.286366 sshd[3116]: Accepted publickey for core from 10.0.0.1 port 35232 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:11.287964 sshd[3116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:11.291173 systemd-logind[1209]: New session 16 of user core.
Jul 14 21:45:11.292126 systemd[1]: Started session-16.scope.
Jul 14 21:45:12.686286 sshd[3116]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:12.688325 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:55278.service.
Jul 14 21:45:12.689813 systemd-logind[1209]: Session 16 logged out. Waiting for processes to exit.
Jul 14 21:45:12.690052 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:35232.service: Deactivated successfully.
Jul 14 21:45:12.690879 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 21:45:12.691526 systemd-logind[1209]: Removed session 16.
Jul 14 21:45:12.736934 sshd[3156]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:12.738593 sshd[3156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:12.742488 systemd-logind[1209]: New session 17 of user core.
Jul 14 21:45:12.743342 systemd[1]: Started session-17.scope.
Jul 14 21:45:12.967586 sshd[3156]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:12.970499 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:55290.service.
Jul 14 21:45:12.973137 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:55278.service: Deactivated successfully.
Jul 14 21:45:12.973912 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 21:45:12.974463 systemd-logind[1209]: Session 17 logged out. Waiting for processes to exit.
Jul 14 21:45:12.975641 systemd-logind[1209]: Removed session 17.
Jul 14 21:45:13.008212 sshd[3168]: Accepted publickey for core from 10.0.0.1 port 55290 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:13.009567 sshd[3168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:13.012899 systemd-logind[1209]: New session 18 of user core.
Jul 14 21:45:13.013742 systemd[1]: Started session-18.scope.
Jul 14 21:45:13.121243 sshd[3168]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:13.123724 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:55290.service: Deactivated successfully.
Jul 14 21:45:13.124410 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 21:45:13.124969 systemd-logind[1209]: Session 18 logged out. Waiting for processes to exit.
Jul 14 21:45:13.125888 systemd-logind[1209]: Removed session 18.
Jul 14 21:45:18.126087 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:55302.service.
Jul 14 21:45:18.162133 sshd[3208]: Accepted publickey for core from 10.0.0.1 port 55302 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:18.164270 sshd[3208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:18.170575 systemd[1]: Started session-19.scope.
Jul 14 21:45:18.171761 systemd-logind[1209]: New session 19 of user core.
Jul 14 21:45:18.288624 sshd[3208]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:18.291660 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:55302.service: Deactivated successfully.
Jul 14 21:45:18.292391 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 21:45:18.294451 systemd-logind[1209]: Session 19 logged out. Waiting for processes to exit.
Jul 14 21:45:18.295804 systemd-logind[1209]: Removed session 19.
Jul 14 21:45:23.293144 systemd[1]: Started sshd@19-10.0.0.9:22-10.0.0.1:39810.service.
Jul 14 21:45:23.335090 sshd[3245]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:23.336525 sshd[3245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:23.340685 systemd-logind[1209]: New session 20 of user core.
Jul 14 21:45:23.341433 systemd[1]: Started session-20.scope.
Jul 14 21:45:23.476052 sshd[3245]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:23.478890 systemd[1]: sshd@19-10.0.0.9:22-10.0.0.1:39810.service: Deactivated successfully.
Jul 14 21:45:23.479638 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 21:45:23.480319 systemd-logind[1209]: Session 20 logged out. Waiting for processes to exit.
Jul 14 21:45:23.481188 systemd-logind[1209]: Removed session 20.
Jul 14 21:45:28.483613 systemd[1]: Started sshd@20-10.0.0.9:22-10.0.0.1:39822.service.
Jul 14 21:45:28.530154 sshd[3279]: Accepted publickey for core from 10.0.0.1 port 39822 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:45:28.532029 sshd[3279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:45:28.538109 systemd-logind[1209]: New session 21 of user core.
Jul 14 21:45:28.539454 systemd[1]: Started session-21.scope.
Jul 14 21:45:28.650298 sshd[3279]: pam_unix(sshd:session): session closed for user core
Jul 14 21:45:28.652850 systemd[1]: sshd@20-10.0.0.9:22-10.0.0.1:39822.service: Deactivated successfully.
Jul 14 21:45:28.653624 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 21:45:28.654149 systemd-logind[1209]: Session 21 logged out. Waiting for processes to exit.
Jul 14 21:45:28.655105 systemd-logind[1209]: Removed session 21.
Jul 14 21:45:29.204477 kubelet[1919]: E0714 21:45:29.204425 1919 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"