Feb 9 09:46:40.736504 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:46:40.736524 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:46:40.736532 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:46:40.736538 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 09:46:40.736543 kernel: random: crng init done
Feb 9 09:46:40.736548 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:46:40.736554 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 09:46:40.736561 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 09:46:40.736566 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736572 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736577 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736582 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736587 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736593 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736601 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736606 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736612 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:46:40.736618 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 09:46:40.736623 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:46:40.736629 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:46:40.736635 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 09:46:40.736640 kernel: Zone ranges:
Feb 9 09:46:40.736646 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:46:40.736652 kernel: DMA32 empty
Feb 9 09:46:40.736658 kernel: Normal empty
Feb 9 09:46:40.736663 kernel: Movable zone start for each node
Feb 9 09:46:40.736669 kernel: Early memory node ranges
Feb 9 09:46:40.736674 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 09:46:40.736680 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 09:46:40.736685 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 09:46:40.736691 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 09:46:40.736697 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 09:46:40.736702 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 09:46:40.736708 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 09:46:40.736713 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:46:40.736720 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 09:46:40.736725 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:46:40.736731 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:46:40.736737 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:46:40.736743 kernel: psci: Trusted OS migration not required
Feb 9 09:46:40.736750 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:46:40.736756 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 09:46:40.736764 kernel: ACPI: SRAT not present
Feb 9 09:46:40.736770 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:46:40.736776 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:46:40.736782 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 09:46:40.736788 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:46:40.736794 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:46:40.736800 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:46:40.736806 kernel: CPU features: detected: Spectre-v4
Feb 9 09:46:40.736812 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:46:40.736819 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:46:40.736825 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:46:40.736831 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:46:40.736837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 09:46:40.736842 kernel: Policy zone: DMA
Feb 9 09:46:40.736849 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:40.736856 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:46:40.736862 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:46:40.736868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:46:40.736874 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:46:40.736880 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 09:46:40.736888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 09:46:40.736894 kernel: trace event string verifier disabled
Feb 9 09:46:40.736900 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:46:40.736906 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:46:40.736912 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 09:46:40.736918 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:46:40.736924 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:46:40.736930 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:46:40.736936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 09:46:40.736942 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:46:40.736948 kernel: GICv3: 256 SPIs implemented
Feb 9 09:46:40.736955 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:46:40.736961 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:46:40.736967 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:46:40.736973 kernel: GICv3: 16 PPIs implemented
Feb 9 09:46:40.736979 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 09:46:40.736985 kernel: ACPI: SRAT not present
Feb 9 09:46:40.736990 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 09:46:40.736997 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:46:40.737003 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:46:40.737009 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 09:46:40.737015 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 09:46:40.737046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:46:40.737054 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:46:40.737060 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:46:40.737066 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:46:40.737072 kernel: arm-pv: using stolen time PV
Feb 9 09:46:40.737078 kernel: Console: colour dummy device 80x25
Feb 9 09:46:40.737085 kernel: ACPI: Core revision 20210730
Feb 9 09:46:40.737091 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:46:40.737097 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:46:40.737103 kernel: LSM: Security Framework initializing
Feb 9 09:46:40.737109 kernel: SELinux: Initializing.
Feb 9 09:46:40.737117 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:40.737123 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:40.737129 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:46:40.737135 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 09:46:40.737142 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 09:46:40.737148 kernel: Remapping and enabling EFI services.
Feb 9 09:46:40.737154 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:46:40.737160 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:46:40.737166 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 09:46:40.737174 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 09:46:40.737180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:46:40.737186 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:46:40.737192 kernel: Detected PIPT I-cache on CPU2
Feb 9 09:46:40.737199 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 09:46:40.737205 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 09:46:40.737211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:46:40.737217 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 09:46:40.737223 kernel: Detected PIPT I-cache on CPU3
Feb 9 09:46:40.737230 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 09:46:40.737237 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 09:46:40.737244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:46:40.737250 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 09:46:40.737256 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 09:46:40.737266 kernel: SMP: Total of 4 processors activated.
Feb 9 09:46:40.737274 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:46:40.737281 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:46:40.737287 kernel: CPU features: detected: Common not Private translations
Feb 9 09:46:40.737294 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:46:40.737300 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:46:40.737306 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:46:40.737313 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:46:40.737321 kernel: CPU features: detected: RAS Extension Support
Feb 9 09:46:40.737327 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 09:46:40.737334 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:46:40.737340 kernel: alternatives: patching kernel code
Feb 9 09:46:40.737346 kernel: devtmpfs: initialized
Feb 9 09:46:40.737354 kernel: KASLR enabled
Feb 9 09:46:40.737361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:46:40.737367 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 09:46:40.737374 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:46:40.737380 kernel: SMBIOS 3.0.0 present.
Feb 9 09:46:40.737387 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 09:46:40.737393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:46:40.737400 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:46:40.737406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:46:40.737414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:46:40.737421 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:46:40.737427 kernel: audit: type=2000 audit(0.040:1): state=initialized audit_enabled=0 res=1
Feb 9 09:46:40.737434 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:46:40.737440 kernel: cpuidle: using governor menu
Feb 9 09:46:40.737447 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:46:40.737453 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:46:40.737459 kernel: ACPI: bus type PCI registered
Feb 9 09:46:40.737466 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:46:40.737474 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:46:40.737481 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:46:40.737487 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:46:40.737494 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:46:40.737500 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:46:40.737507 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:46:40.737513 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:46:40.737520 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:46:40.737526 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:46:40.737534 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:46:40.737540 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:46:40.737547 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:46:40.737553 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:46:40.737559 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:46:40.737566 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:46:40.737572 kernel: ACPI: Interpreter enabled
Feb 9 09:46:40.737579 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:46:40.737585 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:46:40.737593 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:46:40.737600 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:46:40.737606 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 09:46:40.737718 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:46:40.737780 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:46:40.737837 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:46:40.737894 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 09:46:40.737953 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 09:46:40.737962 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 09:46:40.737969 kernel: PCI host bridge to bus 0000:00
Feb 9 09:46:40.738057 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 09:46:40.738112 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:46:40.738163 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 09:46:40.738215 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 09:46:40.738291 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 09:46:40.738360 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 09:46:40.738420 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 09:46:40.738477 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 09:46:40.738535 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:46:40.738593 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:46:40.738651 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 09:46:40.738711 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 09:46:40.738763 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 09:46:40.738822 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:46:40.738874 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 09:46:40.738882 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:46:40.738889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:46:40.738896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:46:40.738902 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:46:40.738910 kernel: iommu: Default domain type: Translated
Feb 9 09:46:40.738917 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:46:40.738924 kernel: vgaarb: loaded
Feb 9 09:46:40.738930 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:46:40.738937 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:46:40.738944 kernel: PTP clock support registered
Feb 9 09:46:40.738950 kernel: Registered efivars operations
Feb 9 09:46:40.738956 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:46:40.738963 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:46:40.738971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:46:40.738977 kernel: pnp: PnP ACPI init
Feb 9 09:46:40.739102 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 09:46:40.739114 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:46:40.739121 kernel: NET: Registered PF_INET protocol family
Feb 9 09:46:40.739128 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:46:40.739134 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:46:40.739141 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:46:40.739150 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:46:40.739157 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:46:40.739164 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:46:40.739170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:40.739177 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:40.739183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:46:40.739190 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:46:40.739196 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 09:46:40.739203 kernel: kvm [1]: HYP mode not available
Feb 9 09:46:40.739211 kernel: Initialise system trusted keyrings
Feb 9 09:46:40.739217 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:46:40.739224 kernel: Key type asymmetric registered
Feb 9 09:46:40.739230 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:46:40.739237 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:46:40.739243 kernel: io scheduler mq-deadline registered
Feb 9 09:46:40.739249 kernel: io scheduler kyber registered
Feb 9 09:46:40.739256 kernel: io scheduler bfq registered
Feb 9 09:46:40.739263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:46:40.739270 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:46:40.739277 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:46:40.739345 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 09:46:40.739354 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:46:40.739361 kernel: thunder_xcv, ver 1.0
Feb 9 09:46:40.739367 kernel: thunder_bgx, ver 1.0
Feb 9 09:46:40.739374 kernel: nicpf, ver 1.0
Feb 9 09:46:40.739380 kernel: nicvf, ver 1.0
Feb 9 09:46:40.739448 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:46:40.739507 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:46:40 UTC (1707472000)
Feb 9 09:46:40.739517 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:46:40.739523 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:46:40.739530 kernel: Segment Routing with IPv6
Feb 9 09:46:40.739537 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:46:40.739543 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:46:40.739550 kernel: Key type dns_resolver registered
Feb 9 09:46:40.739557 kernel: registered taskstats version 1
Feb 9 09:46:40.739564 kernel: Loading compiled-in X.509 certificates
Feb 9 09:46:40.739572 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:46:40.739578 kernel: Key type .fscrypt registered
Feb 9 09:46:40.739585 kernel: Key type fscrypt-provisioning registered
Feb 9 09:46:40.739591 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:46:40.739598 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:46:40.739604 kernel: ima: No architecture policies found
Feb 9 09:46:40.739611 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:46:40.739618 kernel: Run /init as init process
Feb 9 09:46:40.739625 kernel: with arguments:
Feb 9 09:46:40.739632 kernel: /init
Feb 9 09:46:40.739638 kernel: with environment:
Feb 9 09:46:40.739645 kernel: HOME=/
Feb 9 09:46:40.739651 kernel: TERM=linux
Feb 9 09:46:40.739657 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:46:40.739666 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:46:40.739674 systemd[1]: Detected virtualization kvm.
Feb 9 09:46:40.739683 systemd[1]: Detected architecture arm64.
Feb 9 09:46:40.739690 systemd[1]: Running in initrd.
Feb 9 09:46:40.739697 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:46:40.739704 systemd[1]: Hostname set to .
Feb 9 09:46:40.739711 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:46:40.739718 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:46:40.739725 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:46:40.739732 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:46:40.739740 systemd[1]: Reached target paths.target.
Feb 9 09:46:40.739747 systemd[1]: Reached target slices.target.
Feb 9 09:46:40.739754 systemd[1]: Reached target swap.target.
Feb 9 09:46:40.739761 systemd[1]: Reached target timers.target.
Feb 9 09:46:40.739769 systemd[1]: Listening on iscsid.socket.
Feb 9 09:46:40.739776 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:46:40.739783 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:46:40.739791 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:46:40.739798 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:46:40.739805 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:46:40.739812 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:46:40.739819 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:46:40.739826 systemd[1]: Reached target sockets.target.
Feb 9 09:46:40.739833 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:46:40.739840 systemd[1]: Finished network-cleanup.service.
Feb 9 09:46:40.739847 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:46:40.739856 systemd[1]: Starting systemd-journald.service...
Feb 9 09:46:40.739863 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:46:40.739870 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:46:40.739877 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:46:40.739884 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:46:40.739891 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:46:40.739898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:46:40.739905 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:46:40.739912 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:46:40.739923 systemd-journald[289]: Journal started
Feb 9 09:46:40.739958 systemd-journald[289]: Runtime Journal (/run/log/journal/df799c1f0cf641e6bb965aa18c9cea9d) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:46:40.731052 systemd-modules-load[290]: Inserted module 'overlay'
Feb 9 09:46:40.741378 systemd[1]: Started systemd-journald.service.
Feb 9 09:46:40.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.744040 kernel: audit: type=1130 audit(1707472000.741:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.747982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:46:40.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.752078 kernel: audit: type=1130 audit(1707472000.748:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.752106 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:46:40.754259 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 9 09:46:40.755570 kernel: Bridge firewalling registered
Feb 9 09:46:40.754997 systemd-resolved[291]: Positive Trust Anchors:
Feb 9 09:46:40.755004 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:46:40.755058 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:46:40.759573 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 9 09:46:40.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.760311 systemd[1]: Started systemd-resolved.service.
Feb 9 09:46:40.767837 kernel: audit: type=1130 audit(1707472000.762:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.767854 kernel: audit: type=1130 audit(1707472000.765:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.767864 kernel: SCSI subsystem initialized
Feb 9 09:46:40.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.762709 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:46:40.765462 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:46:40.769213 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:46:40.775595 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:46:40.775631 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:46:40.775641 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:46:40.777780 dracut-cmdline[309]: dracut-dracut-053
Feb 9 09:46:40.777781 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 9 09:46:40.778509 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:46:40.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.780307 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:46:40.782994 kernel: audit: type=1130 audit(1707472000.779:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.783089 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:40.788761 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:46:40.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.792051 kernel: audit: type=1130 audit(1707472000.789:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.839051 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:46:40.849052 kernel: iscsi: registered transport (tcp)
Feb 9 09:46:40.862057 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:46:40.862075 kernel: QLogic iSCSI HBA Driver
Feb 9 09:46:40.895103 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:46:40.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.896701 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:46:40.899004 kernel: audit: type=1130 audit(1707472000.895:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:40.942048 kernel: raid6: neonx8 gen() 13687 MB/s
Feb 9 09:46:40.959037 kernel: raid6: neonx8 xor() 10825 MB/s
Feb 9 09:46:40.976036 kernel: raid6: neonx4 gen() 13576 MB/s
Feb 9 09:46:40.993036 kernel: raid6: neonx4 xor() 10919 MB/s
Feb 9 09:46:41.010039 kernel: raid6: neonx2 gen() 12263 MB/s
Feb 9 09:46:41.027037 kernel: raid6: neonx2 xor() 9953 MB/s
Feb 9 09:46:41.044042 kernel: raid6: neonx1 gen() 10521 MB/s
Feb 9 09:46:41.061038 kernel: raid6: neonx1 xor() 8760 MB/s
Feb 9 09:46:41.078045 kernel: raid6: int64x8 gen() 6300 MB/s
Feb 9 09:46:41.095043 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 09:46:41.112043 kernel: raid6: int64x4 gen() 7265 MB/s
Feb 9 09:46:41.129037 kernel: raid6: int64x4 xor() 3858 MB/s
Feb 9 09:46:41.146043 kernel: raid6: int64x2 gen() 6156 MB/s
Feb 9 09:46:41.163049 kernel: raid6: int64x2 xor() 3324 MB/s
Feb 9 09:46:41.180048 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 9 09:46:41.197222 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 09:46:41.197235 kernel: raid6: using algorithm neonx8 gen() 13687 MB/s
Feb 9 09:46:41.197243 kernel: raid6: .... xor() 10825 MB/s, rmw enabled
Feb 9 09:46:41.197252 kernel: raid6: using neon recovery algorithm
Feb 9 09:46:41.208208 kernel: xor: measuring software checksum speed
Feb 9 09:46:41.208240 kernel: 8regs : 17319 MB/sec
Feb 9 09:46:41.209039 kernel: 32regs : 20755 MB/sec
Feb 9 09:46:41.210176 kernel: arm64_neon : 27797 MB/sec
Feb 9 09:46:41.210199 kernel: xor: using function: arm64_neon (27797 MB/sec)
Feb 9 09:46:41.265050 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:46:41.274620 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:46:41.278088 kernel: audit: type=1130 audit(1707472001.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:41.278110 kernel: audit: type=1334 audit(1707472001.277:10): prog-id=7 op=LOAD
Feb 9 09:46:41.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:41.277000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:46:41.278000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:46:41.278502 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:46:41.292552 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 9 09:46:41.295874 systemd[1]: Started systemd-udevd.service.
Feb 9 09:46:41.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:41.299489 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:46:41.309937 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Feb 9 09:46:41.334624 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:46:41.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:41.336046 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:46:41.368934 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:46:41.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:41.397057 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:46:41.399352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:46:41.399381 kernel: GPT:9289727 != 19775487
Feb 9 09:46:41.399389 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:46:41.399404 kernel: GPT:9289727 != 19775487
Feb 9 09:46:41.399412 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:46:41.399420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:46:41.412049 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544)
Feb 9 09:46:41.414099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:46:41.420281 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:46:41.427825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:46:41.430662 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:46:41.431622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:46:41.433880 systemd[1]: Starting disk-uuid.service...
Feb 9 09:46:41.439727 disk-uuid[562]: Primary Header is updated.
Feb 9 09:46:41.439727 disk-uuid[562]: Secondary Entries is updated.
Feb 9 09:46:41.439727 disk-uuid[562]: Secondary Header is updated.
Feb 9 09:46:41.442408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:46:42.454429 disk-uuid[563]: The operation has completed successfully.
Feb 9 09:46:42.455442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:46:42.476376 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:46:42.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 09:46:42.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.476467 systemd[1]: Finished disk-uuid.service. Feb 9 09:46:42.483160 systemd[1]: Starting verity-setup.service... Feb 9 09:46:42.497055 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:46:42.518340 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:46:42.520538 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:46:42.522587 systemd[1]: Finished verity-setup.service. Feb 9 09:46:42.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.568774 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:46:42.570005 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:46:42.569583 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:46:42.570335 systemd[1]: Starting ignition-setup.service... Feb 9 09:46:42.572464 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:46:42.580066 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:46:42.580102 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:46:42.580112 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:46:42.589237 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:46:42.595071 systemd[1]: Finished ignition-setup.service. Feb 9 09:46:42.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.596703 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 09:46:42.664408 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:46:42.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.665000 audit: BPF prog-id=9 op=LOAD Feb 9 09:46:42.666922 systemd[1]: Starting systemd-networkd.service... Feb 9 09:46:42.698162 systemd-networkd[738]: lo: Link UP Feb 9 09:46:42.698173 systemd-networkd[738]: lo: Gained carrier Feb 9 09:46:42.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.698530 systemd-networkd[738]: Enumeration completed Feb 9 09:46:42.698657 systemd[1]: Started systemd-networkd.service. Feb 9 09:46:42.698701 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:46:42.699709 systemd[1]: Reached target network.target. Feb 9 09:46:42.700514 systemd-networkd[738]: eth0: Link UP Feb 9 09:46:42.700517 systemd-networkd[738]: eth0: Gained carrier Feb 9 09:46:42.701770 systemd[1]: Starting iscsiuio.service... Feb 9 09:46:42.714843 systemd[1]: Started iscsiuio.service. Feb 9 09:46:42.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.716620 systemd[1]: Starting iscsid.service... Feb 9 09:46:42.721143 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:46:42.721143 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:46:42.721143 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:46:42.721143 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:46:42.721143 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:46:42.721143 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:46:42.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.726092 systemd[1]: Started iscsid.service. Feb 9 09:46:42.727931 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:46:42.728102 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:46:42.740514 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:46:42.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.741407 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:46:42.742459 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:46:42.742208 ignition[652]: Ignition 2.14.0 Feb 9 09:46:42.743795 systemd[1]: Reached target remote-fs.target. Feb 9 09:46:42.742215 ignition[652]: Stage: fetch-offline Feb 9 09:46:42.746509 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:46:42.742255 ignition[652]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:46:42.742264 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:46:42.742409 ignition[652]: parsed url from cmdline: "" Feb 9 09:46:42.742412 ignition[652]: no config URL provided Feb 9 09:46:42.742417 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:46:42.742424 ignition[652]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:46:42.742444 ignition[652]: op(1): [started] loading QEMU firmware config module Feb 9 09:46:42.742449 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 09:46:42.751349 ignition[652]: op(1): [finished] loading QEMU firmware config module Feb 9 09:46:42.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.756514 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:46:42.796782 ignition[652]: parsing config with SHA512: 22e3160696e4223ce5f6f9d76cd1359ea5ae005d9133585155793a2a8ef79486fe2565cb61412caf8bdc8d3ec5195730dcd3fd3d6f03e2b87dcec1d6e73e5354 Feb 9 09:46:42.830090 unknown[652]: fetched base config from "system" Feb 9 09:46:42.830114 unknown[652]: fetched user config from "qemu" Feb 9 09:46:42.830669 ignition[652]: fetch-offline: fetch-offline passed Feb 9 09:46:42.831918 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:46:42.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.830729 ignition[652]: Ignition finished successfully Feb 9 09:46:42.832987 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Feb 9 09:46:42.833801 systemd[1]: Starting ignition-kargs.service... Feb 9 09:46:42.843150 ignition[759]: Ignition 2.14.0 Feb 9 09:46:42.843159 ignition[759]: Stage: kargs Feb 9 09:46:42.843254 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:46:42.843263 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:46:42.846295 systemd[1]: Finished ignition-kargs.service. Feb 9 09:46:42.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.844342 ignition[759]: kargs: kargs passed Feb 9 09:46:42.844392 ignition[759]: Ignition finished successfully Feb 9 09:46:42.847778 systemd[1]: Starting ignition-disks.service... Feb 9 09:46:42.854660 ignition[765]: Ignition 2.14.0 Feb 9 09:46:42.854670 ignition[765]: Stage: disks Feb 9 09:46:42.854764 ignition[765]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:46:42.854773 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:46:42.856134 ignition[765]: disks: disks passed Feb 9 09:46:42.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.856829 systemd[1]: Finished ignition-disks.service. Feb 9 09:46:42.856181 ignition[765]: Ignition finished successfully Feb 9 09:46:42.857754 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:46:42.858637 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:46:42.859594 systemd[1]: Reached target local-fs.target. Feb 9 09:46:42.860682 systemd[1]: Reached target sysinit.target. Feb 9 09:46:42.861672 systemd[1]: Reached target basic.target. Feb 9 09:46:42.863484 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 09:46:42.874771 systemd-fsck[773]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 09:46:42.899893 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:46:42.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.901495 systemd[1]: Mounting sysroot.mount... Feb 9 09:46:42.908042 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:46:42.908544 systemd[1]: Mounted sysroot.mount. Feb 9 09:46:42.909277 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:46:42.911430 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:46:42.912166 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 09:46:42.912205 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:46:42.912226 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:46:42.914262 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:46:42.915671 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:46:42.920050 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:46:42.924971 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:46:42.927871 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:46:42.932216 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:46:42.959109 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:46:42.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.960686 systemd[1]: Starting ignition-mount.service... 
Feb 9 09:46:42.961952 systemd[1]: Starting sysroot-boot.service... Feb 9 09:46:42.966888 bash[824]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 09:46:42.975185 ignition[826]: INFO : Ignition 2.14.0 Feb 9 09:46:42.975925 ignition[826]: INFO : Stage: mount Feb 9 09:46:42.976707 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:46:42.977637 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:46:42.979706 systemd[1]: Finished sysroot-boot.service. Feb 9 09:46:42.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:42.981272 ignition[826]: INFO : mount: mount passed Feb 9 09:46:42.981881 ignition[826]: INFO : Ignition finished successfully Feb 9 09:46:42.983167 systemd[1]: Finished ignition-mount.service. Feb 9 09:46:42.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:43.530909 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:46:43.536054 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834) Feb 9 09:46:43.538109 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:46:43.538140 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:46:43.538150 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:46:43.540899 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:46:43.542451 systemd[1]: Starting ignition-files.service... 
Feb 9 09:46:43.556106 ignition[854]: INFO : Ignition 2.14.0 Feb 9 09:46:43.556106 ignition[854]: INFO : Stage: files Feb 9 09:46:43.557248 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:46:43.557248 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:46:43.557248 ignition[854]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:46:43.560272 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:46:43.560272 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:46:43.565477 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:46:43.566458 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:46:43.567617 unknown[854]: wrote ssh authorized keys file for user: core Feb 9 09:46:43.568463 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:46:43.568463 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:46:43.568463 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:46:43.866253 systemd-networkd[738]: eth0: Gained IPv6LL Feb 9 09:46:43.902341 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:46:44.094596 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:46:44.096745 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:46:44.096745 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:46:44.096745 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:46:44.322599 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:46:44.440565 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:46:44.442737 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:46:44.442737 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:46:44.442737 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:46:44.470263 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:46:44.506833 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:46:44.508255 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:46:44.509471 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:46:44.554415 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 
09:46:44.830633 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:46:44.832862 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:46:44.832862 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:46:44.832862 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:46:44.853235 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:46:45.508716 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:46:45.510955 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:46:45.510955 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:46:45.510955 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:46:45.542888 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:46:45.803331 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 09:46:45.803331 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:46:45.806702 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:46:45.806702 ignition[854]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Feb 9 
09:46:45.806702 ignition[854]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(13): [started] processing unit "prepare-helm.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(13): [finished] processing unit "prepare-helm.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(15): [started] processing unit "coreos-metadata.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" 
at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:45.824232 ignition[854]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:46:45.847699 ignition[854]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:46:45.847699 ignition[854]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 09:46:45.847699 ignition[854]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:46:45.853509 ignition[854]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:46:45.855550 ignition[854]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 09:46:45.855550 ignition[854]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:45.855550 ignition[854]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:45.855550 ignition[854]: INFO : files: files passed Feb 9 09:46:45.855550 ignition[854]: INFO : Ignition finished successfully Feb 9 09:46:45.863813 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 09:46:45.863832 kernel: 
audit: type=1130 audit(1707472005.856:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.855509 systemd[1]: Finished ignition-files.service.
Feb 9 09:46:45.857201 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 09:46:45.869144 kernel: audit: type=1130 audit(1707472005.865:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.869176 kernel: audit: type=1131 audit(1707472005.865:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.861185 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:46:45.871301 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 09:46:45.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.861800 systemd[1]: Starting ignition-quench.service...
Feb 9 09:46:45.875545 kernel: audit: type=1130 audit(1707472005.870:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.875594 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:46:45.864374 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:46:45.864453 systemd[1]: Finished ignition-quench.service.
Feb 9 09:46:45.869213 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:46:45.870172 systemd[1]: Reached target ignition-complete.target.
Feb 9 09:46:45.873178 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 09:46:45.884573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 09:46:45.884653 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:46:45.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.885861 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:46:45.890879 kernel: audit: type=1130 audit(1707472005.885:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.890895 kernel: audit: type=1131 audit(1707472005.885:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.890454 systemd[1]: Reached target initrd.target.
Feb 9 09:46:45.891534 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:46:45.892211 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:46:45.901752 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:46:45.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.903131 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:46:45.905383 kernel: audit: type=1130 audit(1707472005.902:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.910519 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:46:45.911993 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:46:45.913599 systemd[1]: Stopped target timers.target.
Feb 9 09:46:45.915014 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:46:45.915960 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:46:45.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.917608 systemd[1]: Stopped target initrd.target.
Feb 9 09:46:45.920409 kernel: audit: type=1131 audit(1707472005.917:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.920623 systemd[1]: Stopped target basic.target.
Feb 9 09:46:45.921995 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:46:45.923604 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:46:45.925173 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:46:45.926779 systemd[1]: Stopped target remote-fs.target.
Feb 9 09:46:45.928240 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 09:46:45.929758 systemd[1]: Stopped target sysinit.target.
Feb 9 09:46:45.931216 systemd[1]: Stopped target local-fs.target.
Feb 9 09:46:45.931980 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 09:46:45.933197 systemd[1]: Stopped target swap.target.
Feb 9 09:46:45.934325 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 09:46:45.934429 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 09:46:45.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.935496 systemd[1]: Stopped target cryptsetup.target.
Feb 9 09:46:45.941615 kernel: audit: type=1131 audit(1707472005.935:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.941631 kernel: audit: type=1131 audit(1707472005.939:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.938344 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 09:46:45.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.938446 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 09:46:45.939601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 09:46:45.939696 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 09:46:45.942492 systemd[1]: Stopped target paths.target.
Feb 9 09:46:45.943669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 09:46:45.948090 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 09:46:45.948885 systemd[1]: Stopped target slices.target.
Feb 9 09:46:45.950188 systemd[1]: Stopped target sockets.target.
Feb 9 09:46:45.951304 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 09:46:45.951373 systemd[1]: Closed iscsid.socket.
Feb 9 09:46:45.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.952404 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 09:46:45.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.952502 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 09:46:45.953863 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 09:46:45.953953 systemd[1]: Stopped ignition-files.service.
Feb 9 09:46:45.955539 systemd[1]: Stopping ignition-mount.service...
Feb 9 09:46:45.956823 systemd[1]: Stopping iscsiuio.service...
Feb 9 09:46:45.958168 systemd[1]: Stopping sysroot-boot.service...
Feb 9 09:46:45.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.959339 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 09:46:45.959466 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 09:46:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.964478 ignition[894]: INFO : Ignition 2.14.0
Feb 9 09:46:45.964478 ignition[894]: INFO : Stage: umount
Feb 9 09:46:45.964478 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:46:45.964478 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:46:45.964478 ignition[894]: INFO : umount: umount passed
Feb 9 09:46:45.964478 ignition[894]: INFO : Ignition finished successfully
Feb 9 09:46:45.960650 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 09:46:45.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.960736 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 09:46:45.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.963101 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 09:46:45.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.963190 systemd[1]: Stopped iscsiuio.service.
Feb 9 09:46:45.964123 systemd[1]: Stopped target network.target.
Feb 9 09:46:45.964952 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 09:46:45.964983 systemd[1]: Closed iscsiuio.socket.
Feb 9 09:46:45.966206 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:46:45.967363 systemd[1]: Stopping systemd-resolved.service...
Feb 9 09:46:45.969513 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 09:46:45.969961 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 09:46:45.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.970052 systemd[1]: Finished initrd-cleanup.service.
Feb 9 09:46:45.971077 systemd-networkd[738]: eth0: DHCPv6 lease lost
Feb 9 09:46:45.984000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 09:46:45.985000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 09:46:45.971266 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 09:46:45.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.971337 systemd[1]: Stopped ignition-mount.service.
Feb 9 09:46:45.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.972686 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:46:45.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.972761 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:46:45.974263 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 09:46:45.974325 systemd[1]: Stopped sysroot-boot.service.
Feb 9 09:46:45.976336 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 09:46:45.976419 systemd[1]: Stopped systemd-resolved.service.
Feb 9 09:46:45.978466 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 09:46:45.978495 systemd[1]: Closed systemd-networkd.socket.
Feb 9 09:46:45.979640 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 09:46:45.979679 systemd[1]: Stopped ignition-disks.service.
Feb 9 09:46:45.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.980465 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 09:46:45.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.980499 systemd[1]: Stopped ignition-kargs.service.
Feb 9 09:46:45.981425 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 09:46:45.981457 systemd[1]: Stopped ignition-setup.service.
Feb 9 09:46:46.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.982379 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 09:46:46.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.982410 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 09:46:46.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.984164 systemd[1]: Stopping network-cleanup.service...
Feb 9 09:46:45.985130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 09:46:45.985181 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 09:46:46.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.986206 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:46:46.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.986241 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:46:46.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.987717 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 09:46:45.987754 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:46:46.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:46.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.988623 systemd[1]: Stopping systemd-udevd.service...
Feb 9 09:46:45.993046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 09:46:45.996284 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 09:46:45.996381 systemd[1]: Stopped network-cleanup.service.
Feb 9 09:46:45.997864 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 09:46:45.997978 systemd[1]: Stopped systemd-udevd.service.
Feb 9 09:46:45.999035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 09:46:45.999072 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 09:46:45.999885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 09:46:45.999912 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 09:46:46.000816 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 09:46:46.000853 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 09:46:46.001904 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 09:46:46.001938 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 09:46:46.002821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 09:46:46.002853 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 09:46:46.004522 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 09:46:46.005645 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 09:46:46.005694 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 09:46:46.007406 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 09:46:46.007443 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 09:46:46.008221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 09:46:46.008259 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 09:46:46.010181 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 09:46:46.010546 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 09:46:46.010625 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 09:46:46.011564 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 09:46:46.013348 systemd[1]: Starting initrd-switch-root.service...
Feb 9 09:46:46.019241 systemd[1]: Switching root.
Feb 9 09:46:46.035220 iscsid[743]: iscsid shutting down.
Feb 9 09:46:46.035701 systemd-journald[289]: Journal stopped
Feb 9 09:46:48.079313 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Feb 9 09:46:48.081146 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 09:46:48.081161 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 09:46:48.081172 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 09:46:48.081181 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 09:46:48.081193 kernel: SELinux: policy capability open_perms=1
Feb 9 09:46:48.081203 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 09:46:48.081212 kernel: SELinux: policy capability always_check_network=0
Feb 9 09:46:48.081222 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 09:46:48.081231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 09:46:48.081241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 09:46:48.081250 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 09:46:48.081264 systemd[1]: Successfully loaded SELinux policy in 31.050ms.
Feb 9 09:46:48.081279 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.597ms.
Feb 9 09:46:48.081292 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:46:48.081303 systemd[1]: Detected virtualization kvm.
Feb 9 09:46:48.081313 systemd[1]: Detected architecture arm64.
Feb 9 09:46:48.081323 systemd[1]: Detected first boot.
Feb 9 09:46:48.081333 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:46:48.081343 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 09:46:48.081353 systemd[1]: Populated /etc with preset unit settings.
Feb 9 09:46:48.081365 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:46:48.081380 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:46:48.081392 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:46:48.081404 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 09:46:48.081415 systemd[1]: Stopped iscsid.service.
Feb 9 09:46:48.081425 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 09:46:48.081435 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 09:46:48.081451 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 09:46:48.081461 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 09:46:48.081471 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 09:46:48.081482 systemd[1]: Created slice system-getty.slice.
Feb 9 09:46:48.081492 systemd[1]: Created slice system-modprobe.slice.
Feb 9 09:46:48.081502 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 09:46:48.081513 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 09:46:48.081523 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 09:46:48.081533 systemd[1]: Created slice user.slice.
Feb 9 09:46:48.081545 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:46:48.081556 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 09:46:48.081566 systemd[1]: Set up automount boot.automount.
Feb 9 09:46:48.081577 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 09:46:48.081588 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 09:46:48.081598 systemd[1]: Stopped target initrd-fs.target.
Feb 9 09:46:48.081608 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 09:46:48.081619 systemd[1]: Reached target integritysetup.target.
Feb 9 09:46:48.081631 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:46:48.081642 systemd[1]: Reached target remote-fs.target.
Feb 9 09:46:48.081652 systemd[1]: Reached target slices.target.
Feb 9 09:46:48.081663 systemd[1]: Reached target swap.target.
Feb 9 09:46:48.081699 systemd[1]: Reached target torcx.target.
Feb 9 09:46:48.081715 systemd[1]: Reached target veritysetup.target.
Feb 9 09:46:48.081726 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 09:46:48.081737 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 09:46:48.081747 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:46:48.081760 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:46:48.081770 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:46:48.081781 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 09:46:48.081792 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 09:46:48.081802 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 09:46:48.081813 systemd[1]: Mounting media.mount...
Feb 9 09:46:48.081823 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 09:46:48.081833 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 09:46:48.081844 systemd[1]: Mounting tmp.mount...
Feb 9 09:46:48.081854 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 09:46:48.081866 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 09:46:48.081877 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:46:48.082587 systemd[1]: Starting modprobe@configfs.service...
Feb 9 09:46:48.082612 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 09:46:48.082624 systemd[1]: Starting modprobe@drm.service...
Feb 9 09:46:48.082636 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 09:46:48.082647 systemd[1]: Starting modprobe@fuse.service...
Feb 9 09:46:48.082657 systemd[1]: Starting modprobe@loop.service...
Feb 9 09:46:48.082668 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 09:46:48.082683 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 09:46:48.082693 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 09:46:48.082703 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 09:46:48.082713 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 09:46:48.082724 systemd[1]: Stopped systemd-journald.service.
Feb 9 09:46:48.082733 kernel: fuse: init (API version 7.34)
Feb 9 09:46:48.082743 systemd[1]: Starting systemd-journald.service...
Feb 9 09:46:48.082754 kernel: loop: module loaded
Feb 9 09:46:48.082767 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:46:48.082779 systemd[1]: Starting systemd-network-generator.service...
Feb 9 09:46:48.082790 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 09:46:48.082801 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:46:48.082811 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 09:46:48.082822 systemd[1]: Stopped verity-setup.service.
Feb 9 09:46:48.082832 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 09:46:48.082843 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 09:46:48.082853 systemd[1]: Mounted media.mount.
Feb 9 09:46:48.082865 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 09:46:48.082875 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 09:46:48.082885 systemd[1]: Mounted tmp.mount.
Feb 9 09:46:48.082897 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:46:48.082907 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 09:46:48.082917 systemd[1]: Finished modprobe@configfs.service.
Feb 9 09:46:48.082930 systemd-journald[994]: Journal started
Feb 9 09:46:48.082975 systemd-journald[994]: Runtime Journal (/run/log/journal/df799c1f0cf641e6bb965aa18c9cea9d) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:46:46.094000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 09:46:46.266000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:46:46.266000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:46:46.266000 audit: BPF prog-id=10 op=LOAD
Feb 9 09:46:46.266000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 09:46:46.266000 audit: BPF prog-id=11 op=LOAD
Feb 9 09:46:46.266000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 09:46:46.302000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 09:46:46.302000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:46:46.302000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:46:46.303000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 09:46:46.303000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:46:46.303000 audit: CWD cwd="/"
Feb 9 09:46:46.303000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:46:46.303000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:46:46.303000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:46:47.960000 audit: BPF prog-id=12 op=LOAD
Feb 9 09:46:47.960000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 09:46:47.960000 audit: BPF prog-id=13 op=LOAD
Feb 9 09:46:47.960000 audit: BPF prog-id=14 op=LOAD
Feb 9 09:46:47.960000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 09:46:47.960000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 09:46:47.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.970000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 09:46:48.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.056000 audit: BPF prog-id=15 op=LOAD
Feb 9 09:46:48.056000 audit: BPF prog-id=16 op=LOAD
Feb 9 09:46:48.056000 audit: BPF prog-id=17 op=LOAD
Feb 9 09:46:48.056000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 09:46:48.056000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 09:46:48.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.078000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:46:48.078000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffea7f64c0 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:46:48.078000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:46:48.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.958075 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 09:46:46.301266 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:46:47.958088 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 09:46:46.301733 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:46:47.960938 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 09:46:48.084378 systemd[1]: Started systemd-journald.service.
Feb 9 09:46:46.301752 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:46:46.301779 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 09:46:46.301789 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 09:46:46.301815 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 09:46:46.301826 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 09:46:46.302023 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 09:46:46.302073 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:46:46.302085 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:46:46.302486 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 09:46:46.302519 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 09:46:46.302536 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 09:46:46.302549 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 09:46:48.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 09:46:46.302565 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:46:46.302578 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:46:47.714717 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:46:47.714970 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:46:47.715106 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:46:47.715257 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:46:47.715307 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="profile applied" sealed 
profile=/run/torcx/profile.json upper profile= Feb 9 09:46:47.715361 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-09T09:46:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:46:48.085753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:46:48.085905 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:46:48.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.087038 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:46:48.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.087963 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:46:48.088171 systemd[1]: Finished modprobe@drm.service. Feb 9 09:46:48.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:48.089102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:46:48.089280 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:46:48.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.090403 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:46:48.090555 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:46:48.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.091514 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:46:48.091654 systemd[1]: Finished modprobe@loop.service. Feb 9 09:46:48.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.092732 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 09:46:48.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.093822 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:46:48.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.094943 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:46:48.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.096172 systemd[1]: Reached target network-pre.target. Feb 9 09:46:48.097873 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:46:48.099697 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:46:48.100394 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:46:48.102219 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:46:48.103947 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:46:48.104897 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:46:48.105968 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:46:48.114336 systemd-journald[994]: Time spent on flushing to /var/log/journal/df799c1f0cf641e6bb965aa18c9cea9d is 14.136ms for 1017 entries. Feb 9 09:46:48.114336 systemd-journald[994]: System Journal (/var/log/journal/df799c1f0cf641e6bb965aa18c9cea9d) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:46:48.143438 systemd-journald[994]: Received client request to flush runtime journal. 
Feb 9 09:46:48.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.106926 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.107907 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:46:48.110711 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:46:48.113891 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:46:48.146733 udevadm[1028]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:46:48.116941 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:46:48.120146 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:46:48.121426 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:46:48.122401 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:46:48.124306 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:46:48.135254 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:46:48.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.140133 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:46:48.142021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:46:48.146335 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:46:48.161830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:46:48.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.535110 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:46:48.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.536000 audit: BPF prog-id=18 op=LOAD Feb 9 09:46:48.536000 audit: BPF prog-id=19 op=LOAD Feb 9 09:46:48.536000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:46:48.536000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:46:48.537110 systemd[1]: Starting systemd-udevd.service... Feb 9 09:46:48.559000 systemd-udevd[1033]: Using default interface naming scheme 'v252'. Feb 9 09:46:48.571843 systemd[1]: Started systemd-udevd.service. Feb 9 09:46:48.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.573000 audit: BPF prog-id=20 op=LOAD Feb 9 09:46:48.573851 systemd[1]: Starting systemd-networkd.service... 
Feb 9 09:46:48.580000 audit: BPF prog-id=21 op=LOAD Feb 9 09:46:48.580000 audit: BPF prog-id=22 op=LOAD Feb 9 09:46:48.580000 audit: BPF prog-id=23 op=LOAD Feb 9 09:46:48.581253 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:46:48.597068 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:46:48.611666 systemd[1]: Started systemd-userdbd.service. Feb 9 09:46:48.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.664298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:46:48.674705 systemd-networkd[1040]: lo: Link UP Feb 9 09:46:48.674716 systemd-networkd[1040]: lo: Gained carrier Feb 9 09:46:48.676863 systemd-networkd[1040]: Enumeration completed Feb 9 09:46:48.676958 systemd[1]: Started systemd-networkd.service. Feb 9 09:46:48.676964 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:46:48.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.681807 systemd-networkd[1040]: eth0: Link UP Feb 9 09:46:48.681818 systemd-networkd[1040]: eth0: Gained carrier Feb 9 09:46:48.689351 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:46:48.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.691008 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:46:48.698838 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 9 09:46:48.700148 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:46:48.733659 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:46:48.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.734448 systemd[1]: Reached target cryptsetup.target. Feb 9 09:46:48.735948 systemd[1]: Starting lvm2-activation.service... Feb 9 09:46:48.739206 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:46:48.767856 systemd[1]: Finished lvm2-activation.service. Feb 9 09:46:48.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.768598 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:46:48.769221 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:46:48.769252 systemd[1]: Reached target local-fs.target. Feb 9 09:46:48.769787 systemd[1]: Reached target machines.target. Feb 9 09:46:48.771314 systemd[1]: Starting ldconfig.service... Feb 9 09:46:48.772395 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.772448 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:46:48.773383 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:46:48.777290 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:46:48.779614 systemd[1]: Starting systemd-machine-id-commit.service... 
Feb 9 09:46:48.780424 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.780479 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.781549 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:46:48.784308 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Feb 9 09:46:48.785257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:46:48.793379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:46:48.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.797801 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:46:48.798972 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:46:48.800301 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:46:48.889368 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Feb 9 09:46:48.889368 systemd-fsck[1078]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:46:48.893282 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:46:48.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.895880 systemd[1]: Mounting boot.mount... Feb 9 09:46:48.923959 systemd[1]: Mounted boot.mount. Feb 9 09:46:48.930926 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 9 09:46:48.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.934706 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:46:48.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.976559 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:46:48.980388 systemd[1]: Finished ldconfig.service. Feb 9 09:46:48.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.985535 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:46:48.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.988308 systemd[1]: Starting audit-rules.service... Feb 9 09:46:48.989882 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:46:48.991578 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:46:48.992000 audit: BPF prog-id=24 op=LOAD Feb 9 09:46:48.993708 systemd[1]: Starting systemd-resolved.service... Feb 9 09:46:48.994000 audit: BPF prog-id=25 op=LOAD Feb 9 09:46:48.995652 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:46:48.998687 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:46:48.999801 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 09:46:49.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:49.000961 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:46:49.002000 audit[1093]: SYSTEM_BOOT pid=1093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:46:49.005972 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:46:49.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:49.010023 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:46:49.012009 systemd[1]: Starting systemd-update-done.service... Feb 9 09:46:49.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:49.017294 systemd[1]: Finished systemd-update-done.service. Feb 9 09:46:49.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:49.027000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:46:49.027000 audit[1103]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5a35a90 a2=420 a3=0 items=0 ppid=1082 pid=1103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:46:49.027000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:46:49.028326 augenrules[1103]: No rules Feb 9 09:46:49.029150 systemd[1]: Finished audit-rules.service. Feb 9 09:46:49.049907 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:46:49.050691 systemd[1]: Reached target time-set.target. Feb 9 09:46:49.051476 systemd-timesyncd[1089]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:46:49.051532 systemd-timesyncd[1089]: Initial clock synchronization to Fri 2024-02-09 09:46:49.263630 UTC. Feb 9 09:46:49.053330 systemd-resolved[1086]: Positive Trust Anchors: Feb 9 09:46:49.053341 systemd-resolved[1086]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:46:49.053367 systemd-resolved[1086]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:46:49.065433 systemd-resolved[1086]: Defaulting to hostname 'linux'. Feb 9 09:46:49.066791 systemd[1]: Started systemd-resolved.service. Feb 9 09:46:49.067472 systemd[1]: Reached target network.target. 
Feb 9 09:46:49.068002 systemd[1]: Reached target nss-lookup.target. Feb 9 09:46:49.068595 systemd[1]: Reached target sysinit.target. Feb 9 09:46:49.069215 systemd[1]: Started motdgen.path. Feb 9 09:46:49.069720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:46:49.070665 systemd[1]: Started logrotate.timer. Feb 9 09:46:49.071343 systemd[1]: Started mdadm.timer. Feb 9 09:46:49.072076 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:46:49.072896 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:46:49.072931 systemd[1]: Reached target paths.target. Feb 9 09:46:49.073678 systemd[1]: Reached target timers.target. Feb 9 09:46:49.074735 systemd[1]: Listening on dbus.socket. Feb 9 09:46:49.076556 systemd[1]: Starting docker.socket... Feb 9 09:46:49.079546 systemd[1]: Listening on sshd.socket. Feb 9 09:46:49.080366 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:46:49.081536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:46:49.081907 systemd[1]: Listening on docker.socket. Feb 9 09:46:49.082747 systemd[1]: Reached target sockets.target. Feb 9 09:46:49.083467 systemd[1]: Reached target basic.target. Feb 9 09:46:49.084254 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:46:49.084287 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:46:49.085243 systemd[1]: Starting containerd.service... Feb 9 09:46:49.086830 systemd[1]: Starting dbus.service... Feb 9 09:46:49.088454 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:46:49.090272 systemd[1]: Starting extend-filesystems.service... 
Feb 9 09:46:49.091093 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:46:49.092285 systemd[1]: Starting motdgen.service... Feb 9 09:46:49.094048 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:46:49.096182 systemd[1]: Starting prepare-critools.service... Feb 9 09:46:49.098913 systemd[1]: Starting prepare-helm.service... Feb 9 09:46:49.102185 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:46:49.110096 jq[1113]: false Feb 9 09:46:49.104300 systemd[1]: Starting sshd-keygen.service... Feb 9 09:46:49.108930 systemd[1]: Starting systemd-logind.service... Feb 9 09:46:49.109648 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:46:49.109707 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:46:49.110840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:46:49.111631 systemd[1]: Starting update-engine.service... Feb 9 09:46:49.113597 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:46:49.116398 jq[1131]: true Feb 9 09:46:49.117082 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:46:49.117249 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:46:49.123628 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:46:49.123776 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 09:46:49.128566 tar[1136]: linux-arm64/helm Feb 9 09:46:49.131614 jq[1139]: true Feb 9 09:46:49.136890 tar[1135]: crictl Feb 9 09:46:49.137206 tar[1133]: ./ Feb 9 09:46:49.137206 tar[1133]: ./macvlan Feb 9 09:46:49.132613 systemd[1]: Started dbus.service. Feb 9 09:46:49.131837 dbus-daemon[1112]: [system] SELinux support is enabled Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda1 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda2 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda3 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found usr Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda4 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda6 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda7 Feb 9 09:46:49.137561 extend-filesystems[1114]: Found vda9 Feb 9 09:46:49.137561 extend-filesystems[1114]: Checking size of /dev/vda9 Feb 9 09:46:49.135152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:46:49.135175 systemd[1]: Reached target system-config.target. Feb 9 09:46:49.136008 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:46:49.136023 systemd[1]: Reached target user-config.target. Feb 9 09:46:49.145154 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:46:49.148100 extend-filesystems[1114]: Resized partition /dev/vda9 Feb 9 09:46:49.145316 systemd[1]: Finished motdgen.service. 
Feb 9 09:46:49.158786 extend-filesystems[1156]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:46:49.174453 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:46:49.205121 update_engine[1127]: I0209 09:46:49.201968 1127 main.cc:92] Flatcar Update Engine starting Feb 9 09:46:49.209744 systemd[1]: Started update-engine.service. Feb 9 09:46:49.219621 update_engine[1127]: I0209 09:46:49.209784 1127 update_check_scheduler.cc:74] Next update check in 7m7s Feb 9 09:46:49.212986 systemd[1]: Started locksmithd.service. Feb 9 09:46:49.220745 systemd-logind[1125]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:46:49.221660 systemd-logind[1125]: New seat seat0. Feb 9 09:46:49.232654 systemd[1]: Started systemd-logind.service. Feb 9 09:46:49.238653 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:46:49.276521 extend-filesystems[1156]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:46:49.276521 extend-filesystems[1156]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:46:49.276521 extend-filesystems[1156]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:46:49.292136 extend-filesystems[1114]: Resized filesystem in /dev/vda9 Feb 9 09:46:49.292949 env[1141]: time="2024-02-09T09:46:49.291661000Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:46:49.298278 bash[1167]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:46:49.280517 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:46:49.298398 tar[1133]: ./static Feb 9 09:46:49.280684 systemd[1]: Finished extend-filesystems.service. Feb 9 09:46:49.281880 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:46:49.312008 tar[1133]: ./vlan Feb 9 09:46:49.356808 tar[1133]: ./portmap Feb 9 09:46:49.359537 env[1141]: time="2024-02-09T09:46:49.359495480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 9 09:46:49.359660 env[1141]: time="2024-02-09T09:46:49.359638640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361083 env[1141]: time="2024-02-09T09:46:49.361047960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361083 env[1141]: time="2024-02-09T09:46:49.361081640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361323 env[1141]: time="2024-02-09T09:46:49.361299120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361323 env[1141]: time="2024-02-09T09:46:49.361320840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361377 env[1141]: time="2024-02-09T09:46:49.361334880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:46:49.361377 env[1141]: time="2024-02-09T09:46:49.361344560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361423 env[1141]: time="2024-02-09T09:46:49.361415160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361749 env[1141]: time="2024-02-09T09:46:49.361726280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361866 env[1141]: time="2024-02-09T09:46:49.361844080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:46:49.361866 env[1141]: time="2024-02-09T09:46:49.361863400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:46:49.361924 env[1141]: time="2024-02-09T09:46:49.361912720Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:46:49.361951 env[1141]: time="2024-02-09T09:46:49.361924320Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:46:49.373993 env[1141]: time="2024-02-09T09:46:49.373946440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:46:49.373993 env[1141]: time="2024-02-09T09:46:49.373994720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:46:49.374126 env[1141]: time="2024-02-09T09:46:49.374009920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:46:49.374150 env[1141]: time="2024-02-09T09:46:49.374134440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374171 env[1141]: time="2024-02-09T09:46:49.374155400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374191 env[1141]: time="2024-02-09T09:46:49.374170000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 09:46:49.374211 env[1141]: time="2024-02-09T09:46:49.374194600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374671 env[1141]: time="2024-02-09T09:46:49.374645480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374716 env[1141]: time="2024-02-09T09:46:49.374675880Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374716 env[1141]: time="2024-02-09T09:46:49.374691560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374716 env[1141]: time="2024-02-09T09:46:49.374703960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.374785 env[1141]: time="2024-02-09T09:46:49.374717080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:46:49.374863 env[1141]: time="2024-02-09T09:46:49.374841040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:46:49.374944 env[1141]: time="2024-02-09T09:46:49.374926800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:46:49.375195 env[1141]: time="2024-02-09T09:46:49.375173480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:46:49.375243 env[1141]: time="2024-02-09T09:46:49.375206600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375243 env[1141]: time="2024-02-09T09:46:49.375220600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 9 09:46:49.375389 env[1141]: time="2024-02-09T09:46:49.375372720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375419 env[1141]: time="2024-02-09T09:46:49.375389680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375419 env[1141]: time="2024-02-09T09:46:49.375402640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375419 env[1141]: time="2024-02-09T09:46:49.375413720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375480 env[1141]: time="2024-02-09T09:46:49.375430200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375480 env[1141]: time="2024-02-09T09:46:49.375443680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375480 env[1141]: time="2024-02-09T09:46:49.375455240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375480 env[1141]: time="2024-02-09T09:46:49.375467320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375559 env[1141]: time="2024-02-09T09:46:49.375479920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:46:49.375617 env[1141]: time="2024-02-09T09:46:49.375596560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375649 env[1141]: time="2024-02-09T09:46:49.375618920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 09:46:49.375649 env[1141]: time="2024-02-09T09:46:49.375631800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:46:49.375649 env[1141]: time="2024-02-09T09:46:49.375643560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:46:49.375714 env[1141]: time="2024-02-09T09:46:49.375658320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:46:49.375714 env[1141]: time="2024-02-09T09:46:49.375669560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:46:49.375714 env[1141]: time="2024-02-09T09:46:49.375686520Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:46:49.375774 env[1141]: time="2024-02-09T09:46:49.375720800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:46:49.375967 env[1141]: time="2024-02-09T09:46:49.375912120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.375972640Z" level=info msg="Connect containerd service" Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.376013960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.376899320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.377311160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.377359280Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:46:49.380311 env[1141]: time="2024-02-09T09:46:49.377401600Z" level=info msg="containerd successfully booted in 0.101430s" Feb 9 09:46:49.379333 systemd[1]: Started containerd.service. 
Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380719920Z" level=info msg="Start subscribing containerd event" Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380800120Z" level=info msg="Start recovering state" Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380873520Z" level=info msg="Start event monitor" Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380893240Z" level=info msg="Start snapshots syncer" Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380907400Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:46:49.382058 env[1141]: time="2024-02-09T09:46:49.380915200Z" level=info msg="Start streaming server" Feb 9 09:46:49.405350 tar[1133]: ./host-local Feb 9 09:46:49.442458 tar[1133]: ./vrf Feb 9 09:46:49.483156 tar[1133]: ./bridge Feb 9 09:46:49.531456 tar[1133]: ./tuning Feb 9 09:46:49.569638 tar[1133]: ./firewall Feb 9 09:46:49.578209 tar[1136]: linux-arm64/LICENSE Feb 9 09:46:49.578319 tar[1136]: linux-arm64/README.md Feb 9 09:46:49.582684 systemd[1]: Finished prepare-helm.service. Feb 9 09:46:49.612170 systemd[1]: Finished prepare-critools.service. Feb 9 09:46:49.616063 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:46:49.621629 tar[1133]: ./host-device Feb 9 09:46:49.652314 tar[1133]: ./sbr Feb 9 09:46:49.679450 tar[1133]: ./loopback Feb 9 09:46:49.702537 tar[1133]: ./dhcp Feb 9 09:46:49.766738 tar[1133]: ./ptp Feb 9 09:46:49.794687 tar[1133]: ./ipvlan Feb 9 09:46:49.821923 tar[1133]: ./bandwidth Feb 9 09:46:49.855979 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:46:50.355320 sshd_keygen[1142]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:46:50.373310 systemd[1]: Finished sshd-keygen.service. Feb 9 09:46:50.375591 systemd[1]: Starting issuegen.service... Feb 9 09:46:50.380158 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:46:50.380321 systemd[1]: Finished issuegen.service. 
Feb 9 09:46:50.382490 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:46:50.392210 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:46:50.394139 systemd[1]: Started getty@tty1.service. Feb 9 09:46:50.396071 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:46:50.397039 systemd[1]: Reached target getty.target. Feb 9 09:46:50.397828 systemd[1]: Reached target multi-user.target. Feb 9 09:46:50.399762 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:46:50.405930 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:46:50.406099 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:46:50.407144 systemd[1]: Startup finished in 587ms (kernel) + 5.479s (initrd) + 4.346s (userspace) = 10.413s. Feb 9 09:46:50.651925 systemd-networkd[1040]: eth0: Gained IPv6LL Feb 9 09:46:53.698333 systemd[1]: Created slice system-sshd.slice. Feb 9 09:46:53.699403 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:47722.service. Feb 9 09:46:53.748205 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 47722 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:53.752194 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:53.761672 systemd[1]: Created slice user-500.slice. Feb 9 09:46:53.762671 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:46:53.764106 systemd-logind[1125]: New session 1 of user core. Feb 9 09:46:53.770095 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:46:53.771293 systemd[1]: Starting user@500.service... Feb 9 09:46:53.773739 (systemd)[1204]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:53.829437 systemd[1204]: Queued start job for default target default.target. Feb 9 09:46:53.829850 systemd[1204]: Reached target paths.target. Feb 9 09:46:53.829870 systemd[1204]: Reached target sockets.target. 
Feb 9 09:46:53.829881 systemd[1204]: Reached target timers.target. Feb 9 09:46:53.829891 systemd[1204]: Reached target basic.target. Feb 9 09:46:53.829939 systemd[1204]: Reached target default.target. Feb 9 09:46:53.829962 systemd[1204]: Startup finished in 51ms. Feb 9 09:46:53.830176 systemd[1]: Started user@500.service. Feb 9 09:46:53.831000 systemd[1]: Started session-1.scope. Feb 9 09:46:53.881365 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:47726.service. Feb 9 09:46:53.921869 sshd[1213]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:53.923308 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:53.927106 systemd-logind[1125]: New session 2 of user core. Feb 9 09:46:53.927468 systemd[1]: Started session-2.scope. Feb 9 09:46:53.983820 sshd[1213]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:53.986387 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:47726.service: Deactivated successfully. Feb 9 09:46:53.987021 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:46:53.987478 systemd-logind[1125]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:46:53.988773 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:47742.service. Feb 9 09:46:53.989350 systemd-logind[1125]: Removed session 2. Feb 9 09:46:54.029711 sshd[1219]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:54.030737 sshd[1219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:54.033856 systemd-logind[1125]: New session 3 of user core. Feb 9 09:46:54.034577 systemd[1]: Started session-3.scope. Feb 9 09:46:54.083410 sshd[1219]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:54.086645 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:47742.service: Deactivated successfully. 
Feb 9 09:46:54.087264 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:46:54.087776 systemd-logind[1125]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:46:54.088752 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:47748.service. Feb 9 09:46:54.089454 systemd-logind[1125]: Removed session 3. Feb 9 09:46:54.129497 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:54.130508 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:54.133633 systemd-logind[1125]: New session 4 of user core. Feb 9 09:46:54.134353 systemd[1]: Started session-4.scope. Feb 9 09:46:54.188805 sshd[1225]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:54.192401 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:47748.service: Deactivated successfully. Feb 9 09:46:54.192990 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:46:54.193534 systemd-logind[1125]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:46:54.194461 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:47762.service. Feb 9 09:46:54.195423 systemd-logind[1125]: Removed session 4. Feb 9 09:46:54.235276 sshd[1231]: Accepted publickey for core from 10.0.0.1 port 47762 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:54.235695 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:54.238494 systemd-logind[1125]: New session 5 of user core. Feb 9 09:46:54.239220 systemd[1]: Started session-5.scope. Feb 9 09:46:54.303523 sudo[1234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:46:54.303838 sudo[1234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:46:54.869366 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:46:54.874880 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 09:46:54.875641 systemd[1]: Reached target network-online.target. Feb 9 09:46:54.877231 systemd[1]: Starting docker.service... Feb 9 09:46:54.956488 env[1256]: time="2024-02-09T09:46:54.956428471Z" level=info msg="Starting up" Feb 9 09:46:54.958053 env[1256]: time="2024-02-09T09:46:54.958021342Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:46:54.958141 env[1256]: time="2024-02-09T09:46:54.958119891Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:46:54.958170 env[1256]: time="2024-02-09T09:46:54.958146011Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:46:54.958170 env[1256]: time="2024-02-09T09:46:54.958157182Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:46:54.959975 env[1256]: time="2024-02-09T09:46:54.959932162Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:46:54.959975 env[1256]: time="2024-02-09T09:46:54.959956413Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:46:54.959975 env[1256]: time="2024-02-09T09:46:54.959970306Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:46:54.959975 env[1256]: time="2024-02-09T09:46:54.959979649Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:46:54.964873 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2697795042-merged.mount: Deactivated successfully. Feb 9 09:46:54.983093 env[1256]: time="2024-02-09T09:46:54.983062507Z" level=info msg="Loading containers: start." Feb 9 09:46:55.072058 kernel: Initializing XFRM netlink socket Feb 9 09:46:55.094022 env[1256]: time="2024-02-09T09:46:55.093975260Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 09:46:55.145618 systemd-networkd[1040]: docker0: Link UP Feb 9 09:46:55.153116 env[1256]: time="2024-02-09T09:46:55.153087596Z" level=info msg="Loading containers: done." Feb 9 09:46:55.175727 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2479949833-merged.mount: Deactivated successfully. Feb 9 09:46:55.180545 env[1256]: time="2024-02-09T09:46:55.180512189Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:46:55.180846 env[1256]: time="2024-02-09T09:46:55.180824218Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:46:55.181012 env[1256]: time="2024-02-09T09:46:55.180994666Z" level=info msg="Daemon has completed initialization" Feb 9 09:46:55.195055 systemd[1]: Started docker.service. Feb 9 09:46:55.202505 env[1256]: time="2024-02-09T09:46:55.202465616Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:46:55.217651 systemd[1]: Reloading. Feb 9 09:46:55.258225 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T09:46:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:55.258256 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T09:46:55Z" level=info msg="torcx already run" Feb 9 09:46:55.316775 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:55.316796 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 09:46:55.334229 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:55.392397 systemd[1]: Started kubelet.service. Feb 9 09:46:55.554232 kubelet[1435]: E0209 09:46:55.554096 1435 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:46:55.556229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:55.556355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:55.817645 env[1141]: time="2024-02-09T09:46:55.817530896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:46:56.504923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452627514.mount: Deactivated successfully. 
Feb 9 09:46:58.217464 env[1141]: time="2024-02-09T09:46:58.217417007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:58.218724 env[1141]: time="2024-02-09T09:46:58.218694951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:58.220526 env[1141]: time="2024-02-09T09:46:58.220499492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:58.222109 env[1141]: time="2024-02-09T09:46:58.222089737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:58.222876 env[1141]: time="2024-02-09T09:46:58.222852265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:46:58.232057 env[1141]: time="2024-02-09T09:46:58.232020523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:47:00.024958 env[1141]: time="2024-02-09T09:47:00.024910461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:00.026320 env[1141]: time="2024-02-09T09:47:00.026279349Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 09:47:00.028690 env[1141]: time="2024-02-09T09:47:00.028654208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:00.030295 env[1141]: time="2024-02-09T09:47:00.030265536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:00.031191 env[1141]: time="2024-02-09T09:47:00.031160660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:47:00.041654 env[1141]: time="2024-02-09T09:47:00.041630946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:47:02.000793 env[1141]: time="2024-02-09T09:47:02.000740071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.001930 env[1141]: time="2024-02-09T09:47:02.001902871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.004427 env[1141]: time="2024-02-09T09:47:02.004396565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.006022 env[1141]: time="2024-02-09T09:47:02.005997997Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.006838 env[1141]: time="2024-02-09T09:47:02.006797406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:47:02.015981 env[1141]: time="2024-02-09T09:47:02.015956365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:47:03.056848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258063296.mount: Deactivated successfully. Feb 9 09:47:03.503215 env[1141]: time="2024-02-09T09:47:03.503162319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.504996 env[1141]: time="2024-02-09T09:47:03.504965430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.506435 env[1141]: time="2024-02-09T09:47:03.506409631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.507534 env[1141]: time="2024-02-09T09:47:03.507496850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.508063 env[1141]: time="2024-02-09T09:47:03.508015624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:47:03.518390 env[1141]: time="2024-02-09T09:47:03.518350761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:47:03.965552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750884337.mount: Deactivated successfully. Feb 9 09:47:03.969137 env[1141]: time="2024-02-09T09:47:03.969101330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.970352 env[1141]: time="2024-02-09T09:47:03.970308628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.971519 env[1141]: time="2024-02-09T09:47:03.971486229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.972743 env[1141]: time="2024-02-09T09:47:03.972720613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:03.973891 env[1141]: time="2024-02-09T09:47:03.973855133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:47:03.982333 env[1141]: time="2024-02-09T09:47:03.982303971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:47:04.760739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503970109.mount: Deactivated successfully. Feb 9 09:47:05.635669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 9 09:47:05.635849 systemd[1]: Stopped kubelet.service. Feb 9 09:47:05.637310 systemd[1]: Started kubelet.service. Feb 9 09:47:05.680650 kubelet[1490]: E0209 09:47:05.680593 1490 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:47:05.683554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:05.683680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:47:06.861794 env[1141]: time="2024-02-09T09:47:06.861723579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:06.863281 env[1141]: time="2024-02-09T09:47:06.863242602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:06.865260 env[1141]: time="2024-02-09T09:47:06.865223226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:06.866769 env[1141]: time="2024-02-09T09:47:06.866734023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:06.867456 env[1141]: time="2024-02-09T09:47:06.867422934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:47:06.876449 env[1141]: time="2024-02-09T09:47:06.876420147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:47:07.478834 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3310263628.mount: Deactivated successfully. Feb 9 09:47:07.944312 env[1141]: time="2024-02-09T09:47:07.944266432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.945596 env[1141]: time="2024-02-09T09:47:07.945574768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.947096 env[1141]: time="2024-02-09T09:47:07.947062915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.948620 env[1141]: time="2024-02-09T09:47:07.948593619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.949121 env[1141]: time="2024-02-09T09:47:07.949093224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:47:12.380633 systemd[1]: Stopped kubelet.service. Feb 9 09:47:12.393149 systemd[1]: Reloading. 
Feb 9 09:47:12.439608 /usr/lib/systemd/system-generators/torcx-generator[1593]: time="2024-02-09T09:47:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:12.439640 /usr/lib/systemd/system-generators/torcx-generator[1593]: time="2024-02-09T09:47:12Z" level=info msg="torcx already run" Feb 9 09:47:12.489180 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:12.489200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:12.506052 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:12.567854 systemd[1]: Started kubelet.service. Feb 9 09:47:12.607987 kubelet[1630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:12.607987 kubelet[1630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:12.608327 kubelet[1630]: I0209 09:47:12.608103 1630 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:12.609604 kubelet[1630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:12.609604 kubelet[1630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:13.526582 kubelet[1630]: I0209 09:47:13.526550 1630 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:47:13.526582 kubelet[1630]: I0209 09:47:13.526579 1630 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:13.526805 kubelet[1630]: I0209 09:47:13.526790 1630 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:47:13.531287 kubelet[1630]: I0209 09:47:13.531270 1630 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:13.531615 kubelet[1630]: E0209 09:47:13.531594 1630 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.533443 kubelet[1630]: W0209 09:47:13.533431 1630 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:47:13.534185 kubelet[1630]: I0209 09:47:13.534172 1630 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:47:13.534621 kubelet[1630]: I0209 09:47:13.534611 1630 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:13.534689 kubelet[1630]: I0209 09:47:13.534676 1630 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:47:13.534762 kubelet[1630]: I0209 09:47:13.534698 1630 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:47:13.534762 kubelet[1630]: I0209 09:47:13.534709 1630 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:47:13.534863 kubelet[1630]: I0209 09:47:13.534851 1630 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:47:13.540082 kubelet[1630]: I0209 09:47:13.540063 1630 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:47:13.540082 kubelet[1630]: I0209 09:47:13.540085 1630 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:13.540298 kubelet[1630]: I0209 09:47:13.540289 1630 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:47:13.540342 kubelet[1630]: I0209 09:47:13.540305 1630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:13.541256 kubelet[1630]: W0209 09:47:13.541208 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.541256 kubelet[1630]: E0209 09:47:13.541259 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.541351 kubelet[1630]: W0209 09:47:13.541308 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.541351 kubelet[1630]: E0209 09:47:13.541334 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.541351 kubelet[1630]: I0209 09:47:13.541340 1630 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 
09:47:13.542226 kubelet[1630]: W0209 09:47:13.542204 1630 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:47:13.542681 kubelet[1630]: I0209 09:47:13.542659 1630 server.go:1186] "Started kubelet" Feb 9 09:47:13.543360 kubelet[1630]: I0209 09:47:13.543340 1630 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:13.543679 kubelet[1630]: E0209 09:47:13.543653 1630 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:13.543679 kubelet[1630]: E0209 09:47:13.543674 1630 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:13.544201 kubelet[1630]: I0209 09:47:13.544183 1630 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:47:13.544589 kubelet[1630]: E0209 09:47:13.543353 1630 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4973912f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 542640374, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 542640374, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.24:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.24:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:47:13.545049 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:47:13.545109 kubelet[1630]: I0209 09:47:13.545085 1630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:13.546387 kubelet[1630]: I0209 09:47:13.546368 1630 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:47:13.546462 kubelet[1630]: I0209 09:47:13.546427 1630 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:13.546913 kubelet[1630]: W0209 09:47:13.546832 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.546913 kubelet[1630]: E0209 09:47:13.546876 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.547080 kubelet[1630]: E0209 09:47:13.547065 1630 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:47:13.547308 kubelet[1630]: E0209 09:47:13.547283 1630 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get 
"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.563351 kubelet[1630]: I0209 09:47:13.563335 1630 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:13.563351 kubelet[1630]: I0209 09:47:13.563350 1630 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:13.563432 kubelet[1630]: I0209 09:47:13.563365 1630 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:13.589918 kubelet[1630]: I0209 09:47:13.589883 1630 policy_none.go:49] "None policy: Start" Feb 9 09:47:13.590620 kubelet[1630]: I0209 09:47:13.590595 1630 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:13.590620 kubelet[1630]: I0209 09:47:13.590623 1630 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:13.598557 kubelet[1630]: I0209 09:47:13.598523 1630 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:47:13.607421 systemd[1]: Created slice kubepods.slice. Feb 9 09:47:13.610933 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:47:13.613214 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:47:13.619622 kubelet[1630]: I0209 09:47:13.619600 1630 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:13.619834 kubelet[1630]: I0209 09:47:13.619783 1630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:13.620479 kubelet[1630]: E0209 09:47:13.620459 1630 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 09:47:13.627563 kubelet[1630]: I0209 09:47:13.627544 1630 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:47:13.627660 kubelet[1630]: I0209 09:47:13.627650 1630 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:47:13.627755 kubelet[1630]: I0209 09:47:13.627744 1630 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:47:13.627874 kubelet[1630]: E0209 09:47:13.627864 1630 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:47:13.628551 kubelet[1630]: W0209 09:47:13.628509 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.628676 kubelet[1630]: E0209 09:47:13.628663 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.648587 kubelet[1630]: I0209 09:47:13.648568 1630 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:47:13.649119 kubelet[1630]: E0209 09:47:13.649102 1630 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Feb 9 09:47:13.728339 kubelet[1630]: I0209 09:47:13.728313 1630 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:13.729486 kubelet[1630]: I0209 09:47:13.729466 1630 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:13.730219 kubelet[1630]: I0209 09:47:13.730187 1630 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:13.731077 kubelet[1630]: I0209 09:47:13.731061 1630 status_manager.go:698] "Failed to get status for pod" podUID=badfc18b216be78deea79b356dfdbf9e 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused" Feb 9 09:47:13.732040 kubelet[1630]: I0209 09:47:13.731932 1630 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused" Feb 9 09:47:13.732572 kubelet[1630]: I0209 09:47:13.732556 1630 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused" Feb 9 09:47:13.735140 systemd[1]: Created slice kubepods-burstable-podbadfc18b216be78deea79b356dfdbf9e.slice. Feb 9 09:47:13.747933 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 9 09:47:13.748036 kubelet[1630]: E0209 09:47:13.747920 1630 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:13.751263 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
Feb 9 09:47:13.848045 kubelet[1630]: I0209 09:47:13.847394 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:47:13.848045 kubelet[1630]: I0209 09:47:13.847442 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:13.848045 kubelet[1630]: I0209 09:47:13.847464 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:13.848045 kubelet[1630]: I0209 09:47:13.847483 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:13.848045 kubelet[1630]: I0209 09:47:13.847515 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 
9 09:47:13.848264 kubelet[1630]: I0209 09:47:13.847550 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:13.848264 kubelet[1630]: I0209 09:47:13.847571 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:13.848264 kubelet[1630]: I0209 09:47:13.847590 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:13.848264 kubelet[1630]: I0209 09:47:13.847612 1630 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:13.850400 kubelet[1630]: I0209 09:47:13.850373 1630 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:47:13.850891 kubelet[1630]: E0209 09:47:13.850859 1630 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Feb 9 09:47:14.046847 
kubelet[1630]: E0209 09:47:14.046811 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.047369 env[1141]: time="2024-02-09T09:47:14.047335483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:badfc18b216be78deea79b356dfdbf9e,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:14.050653 kubelet[1630]: E0209 09:47:14.050633 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.050979 env[1141]: time="2024-02-09T09:47:14.050950250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:14.053564 kubelet[1630]: E0209 09:47:14.053541 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.053948 env[1141]: time="2024-02-09T09:47:14.053915237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:14.148897 kubelet[1630]: E0209 09:47:14.148845 1630 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.252591 kubelet[1630]: I0209 09:47:14.252566 1630 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:47:14.252920 kubelet[1630]: E0209 09:47:14.252901 1630 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Feb 9 09:47:14.466336 kubelet[1630]: W0209 09:47:14.466239 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.466336 kubelet[1630]: E0209 09:47:14.466290 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.511958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219374384.mount: Deactivated successfully. Feb 9 09:47:14.516049 env[1141]: time="2024-02-09T09:47:14.515997776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.516778 env[1141]: time="2024-02-09T09:47:14.516759275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.518911 env[1141]: time="2024-02-09T09:47:14.518884120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.519888 env[1141]: time="2024-02-09T09:47:14.519859208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.521437 env[1141]: time="2024-02-09T09:47:14.521409795Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.523734 env[1141]: time="2024-02-09T09:47:14.523708226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.524454 env[1141]: time="2024-02-09T09:47:14.524431523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.527166 env[1141]: time="2024-02-09T09:47:14.527142158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.529064 env[1141]: time="2024-02-09T09:47:14.528973126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.530531 env[1141]: time="2024-02-09T09:47:14.530505453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.531176 env[1141]: time="2024-02-09T09:47:14.531145942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.531859 env[1141]: time="2024-02-09T09:47:14.531825593Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:14.543094 kubelet[1630]: W0209 09:47:14.543066 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.543170 kubelet[1630]: E0209 09:47:14.543103 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.567476 env[1141]: time="2024-02-09T09:47:14.567238385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:14.567476 env[1141]: time="2024-02-09T09:47:14.567272501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:14.567476 env[1141]: time="2024-02-09T09:47:14.567282272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:14.567698 env[1141]: time="2024-02-09T09:47:14.567654232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:14.567770 env[1141]: time="2024-02-09T09:47:14.567689069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:14.567770 env[1141]: time="2024-02-09T09:47:14.567699520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:14.567866 env[1141]: time="2024-02-09T09:47:14.567822613Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1c548efb8f77c5b3ca5d415da9f05a45b948957f72f5d93e8b16d6d81b1226c pid=1722 runtime=io.containerd.runc.v2 Feb 9 09:47:14.567939 env[1141]: time="2024-02-09T09:47:14.567811280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca97bc74334189b2ee690f9f9e26d1758e1ac3cdec5bd2a02ac02aa94d0d0e78 pid=1723 runtime=io.containerd.runc.v2 Feb 9 09:47:14.568366 env[1141]: time="2024-02-09T09:47:14.568295001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:14.568366 env[1141]: time="2024-02-09T09:47:14.568337166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:14.568366 env[1141]: time="2024-02-09T09:47:14.568347137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:14.568524 env[1141]: time="2024-02-09T09:47:14.568481481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f89d9318e80d41a91c1f2a4601ea2824bbd66939c491d97d5841d2fd96e8937 pid=1725 runtime=io.containerd.runc.v2 Feb 9 09:47:14.579387 systemd[1]: Started cri-containerd-f1c548efb8f77c5b3ca5d415da9f05a45b948957f72f5d93e8b16d6d81b1226c.scope. Feb 9 09:47:14.585073 systemd[1]: Started cri-containerd-4f89d9318e80d41a91c1f2a4601ea2824bbd66939c491d97d5841d2fd96e8937.scope. Feb 9 09:47:14.598543 systemd[1]: Started cri-containerd-ca97bc74334189b2ee690f9f9e26d1758e1ac3cdec5bd2a02ac02aa94d0d0e78.scope. 
Feb 9 09:47:14.643746 env[1141]: time="2024-02-09T09:47:14.642226563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:badfc18b216be78deea79b356dfdbf9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f89d9318e80d41a91c1f2a4601ea2824bbd66939c491d97d5841d2fd96e8937\"" Feb 9 09:47:14.643877 kubelet[1630]: E0209 09:47:14.643086 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.647906 env[1141]: time="2024-02-09T09:47:14.647866146Z" level=info msg="CreateContainer within sandbox \"4f89d9318e80d41a91c1f2a4601ea2824bbd66939c491d97d5841d2fd96e8937\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:47:14.648176 env[1141]: time="2024-02-09T09:47:14.648147289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c548efb8f77c5b3ca5d415da9f05a45b948957f72f5d93e8b16d6d81b1226c\"" Feb 9 09:47:14.648690 kubelet[1630]: E0209 09:47:14.648662 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.651156 env[1141]: time="2024-02-09T09:47:14.651127533Z" level=info msg="CreateContainer within sandbox \"f1c548efb8f77c5b3ca5d415da9f05a45b948957f72f5d93e8b16d6d81b1226c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:47:14.662704 env[1141]: time="2024-02-09T09:47:14.662672945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca97bc74334189b2ee690f9f9e26d1758e1ac3cdec5bd2a02ac02aa94d0d0e78\"" Feb 9 09:47:14.663556 kubelet[1630]: E0209 09:47:14.663447 1630 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:14.664315 env[1141]: time="2024-02-09T09:47:14.664282796Z" level=info msg="CreateContainer within sandbox \"4f89d9318e80d41a91c1f2a4601ea2824bbd66939c491d97d5841d2fd96e8937\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99fb63acd560958650e42b9feace04747192daa9221fea00f370aa87311d2d4e\"" Feb 9 09:47:14.665459 env[1141]: time="2024-02-09T09:47:14.665424703Z" level=info msg="StartContainer for \"99fb63acd560958650e42b9feace04747192daa9221fea00f370aa87311d2d4e\"" Feb 9 09:47:14.666166 env[1141]: time="2024-02-09T09:47:14.665435795Z" level=info msg="CreateContainer within sandbox \"ca97bc74334189b2ee690f9f9e26d1758e1ac3cdec5bd2a02ac02aa94d0d0e78\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:47:14.667097 env[1141]: time="2024-02-09T09:47:14.667023262Z" level=info msg="CreateContainer within sandbox \"f1c548efb8f77c5b3ca5d415da9f05a45b948957f72f5d93e8b16d6d81b1226c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41ff96b1ac71efb925e15691c04f97e22af1b94ecc78f29e42936f88e4f06816\"" Feb 9 09:47:14.670180 env[1141]: time="2024-02-09T09:47:14.670148502Z" level=info msg="StartContainer for \"41ff96b1ac71efb925e15691c04f97e22af1b94ecc78f29e42936f88e4f06816\"" Feb 9 09:47:14.680261 env[1141]: time="2024-02-09T09:47:14.680205274Z" level=info msg="CreateContainer within sandbox \"ca97bc74334189b2ee690f9f9e26d1758e1ac3cdec5bd2a02ac02aa94d0d0e78\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cda0538aaeafca51673eeb52e077dbeee631212943af1893b074720edd11cad\"" Feb 9 09:47:14.680733 env[1141]: time="2024-02-09T09:47:14.680707133Z" level=info msg="StartContainer for \"7cda0538aaeafca51673eeb52e077dbeee631212943af1893b074720edd11cad\"" Feb 9 09:47:14.687215 
systemd[1]: Started cri-containerd-99fb63acd560958650e42b9feace04747192daa9221fea00f370aa87311d2d4e.scope. Feb 9 09:47:14.695879 systemd[1]: Started cri-containerd-41ff96b1ac71efb925e15691c04f97e22af1b94ecc78f29e42936f88e4f06816.scope. Feb 9 09:47:14.707399 systemd[1]: Started cri-containerd-7cda0538aaeafca51673eeb52e077dbeee631212943af1893b074720edd11cad.scope. Feb 9 09:47:14.755653 env[1141]: time="2024-02-09T09:47:14.755546592Z" level=info msg="StartContainer for \"41ff96b1ac71efb925e15691c04f97e22af1b94ecc78f29e42936f88e4f06816\" returns successfully" Feb 9 09:47:14.762689 env[1141]: time="2024-02-09T09:47:14.758219426Z" level=info msg="StartContainer for \"99fb63acd560958650e42b9feace04747192daa9221fea00f370aa87311d2d4e\" returns successfully" Feb 9 09:47:14.793724 env[1141]: time="2024-02-09T09:47:14.793590333Z" level=info msg="StartContainer for \"7cda0538aaeafca51673eeb52e077dbeee631212943af1893b074720edd11cad\" returns successfully" Feb 9 09:47:14.805637 kubelet[1630]: W0209 09:47:14.805544 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.805637 kubelet[1630]: E0209 09:47:14.805603 1630 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.928606 kubelet[1630]: W0209 09:47:14.928547 1630 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:14.928606 kubelet[1630]: E0209 09:47:14.928608 1630 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 09:47:15.054313 kubelet[1630]: I0209 09:47:15.054215 1630 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:47:15.635019 kubelet[1630]: E0209 09:47:15.634987 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:15.636990 kubelet[1630]: E0209 09:47:15.636970 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:15.638616 kubelet[1630]: E0209 09:47:15.638566 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:16.640166 kubelet[1630]: E0209 09:47:16.640138 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:16.640619 kubelet[1630]: E0209 09:47:16.640179 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:16.640619 kubelet[1630]: E0209 09:47:16.640219 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:16.811429 kubelet[1630]: E0209 09:47:16.811395 1630 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 
09:47:16.861544 kubelet[1630]: I0209 09:47:16.861497 1630 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:47:16.885477 kubelet[1630]: E0209 09:47:16.885423 1630 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:47:16.927898 kubelet[1630]: E0209 09:47:16.927732 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4973912f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 542640374, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 542640374, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:16.980830 kubelet[1630]: E0209 09:47:16.980753 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c49748bc6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 543666795, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 543666795, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:16.985891 kubelet[1630]: E0209 09:47:16.985862 1630 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:47:17.034323 kubelet[1630]: E0209 09:47:17.034261 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d5ae2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562843874, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562843874, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.086696 kubelet[1630]: E0209 09:47:17.086653 1630 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:47:17.087555 kubelet[1630]: E0209 09:47:17.087468 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d6f60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562849120, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562849120, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.140995 kubelet[1630]: E0209 09:47:17.140926 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d7b1c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562852124, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562852124, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.194438 kubelet[1630]: E0209 09:47:17.194330 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c49be4b011", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 620996113, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 620996113, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.249194 kubelet[1630]: E0209 09:47:17.249130 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d5ae2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562843874, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 648512476, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.303562 kubelet[1630]: E0209 09:47:17.303491 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d6f60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562849120, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 648524491, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.358433 kubelet[1630]: E0209 09:47:17.358369 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d7b1c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562852124, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 648528616, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.543106 kubelet[1630]: I0209 09:47:17.543023 1630 apiserver.go:52] "Watching apiserver" Feb 9 09:47:17.581599 kubelet[1630]: E0209 09:47:17.581517 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d5ae2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562843874, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 729379179, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:17.981185 kubelet[1630]: E0209 09:47:17.981090 1630 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228c4986d6f60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 562849120, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 13, 729391915, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:47:18.147242 kubelet[1630]: I0209 09:47:18.147213 1630 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:47:18.174392 kubelet[1630]: I0209 09:47:18.174366 1630 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:47:18.347153 kubelet[1630]: E0209 09:47:18.347072 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:18.642108 kubelet[1630]: E0209 09:47:18.642090 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:19.054121 kubelet[1630]: E0209 09:47:19.054016 1630 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:19.144219 systemd[1]: Reloading. Feb 9 09:47:19.191325 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2024-02-09T09:47:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:19.191355 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2024-02-09T09:47:19Z" level=info msg="torcx already run" Feb 9 09:47:19.246971 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:19.246989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:47:19.264225 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:19.345717 systemd[1]: Stopping kubelet.service... Feb 9 09:47:19.365452 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:47:19.365650 systemd[1]: Stopped kubelet.service. Feb 9 09:47:19.365695 systemd[1]: kubelet.service: Consumed 1.270s CPU time. Feb 9 09:47:19.367310 systemd[1]: Started kubelet.service. Feb 9 09:47:19.426426 kubelet[2001]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:19.426426 kubelet[2001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:19.426733 kubelet[2001]: I0209 09:47:19.426453 2001 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:19.427666 kubelet[2001]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:19.427666 kubelet[2001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:47:19.430362 kubelet[2001]: I0209 09:47:19.430340 2001 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:47:19.430475 kubelet[2001]: I0209 09:47:19.430464 2001 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:19.430702 kubelet[2001]: I0209 09:47:19.430685 2001 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:47:19.431898 kubelet[2001]: I0209 09:47:19.431879 2001 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:47:19.432846 kubelet[2001]: I0209 09:47:19.432813 2001 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:19.434296 kubelet[2001]: W0209 09:47:19.434277 2001 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:47:19.435117 kubelet[2001]: I0209 09:47:19.435092 2001 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:47:19.435381 kubelet[2001]: I0209 09:47:19.435367 2001 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:19.435504 kubelet[2001]: I0209 09:47:19.435492 2001 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available 
Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:47:19.435628 kubelet[2001]: I0209 09:47:19.435616 2001 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:47:19.435695 kubelet[2001]: I0209 09:47:19.435686 2001 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:47:19.435774 kubelet[2001]: I0209 09:47:19.435765 2001 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:19.438310 kubelet[2001]: I0209 09:47:19.438287 2001 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:47:19.438310 kubelet[2001]: I0209 09:47:19.438309 2001 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:19.438403 kubelet[2001]: I0209 09:47:19.438332 2001 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:47:19.438403 kubelet[2001]: I0209 09:47:19.438342 2001 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:19.443418 kubelet[2001]: I0209 09:47:19.443399 2001 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:47:19.443891 kubelet[2001]: I0209 09:47:19.443871 2001 server.go:1186] "Started kubelet" Feb 9 09:47:19.444315 kubelet[2001]: I0209 09:47:19.444281 2001 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:19.444963 kubelet[2001]: I0209 09:47:19.444943 2001 
server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:47:19.445539 kubelet[2001]: I0209 09:47:19.445503 2001 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:19.446693 kubelet[2001]: I0209 09:47:19.446227 2001 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:47:19.446693 kubelet[2001]: I0209 09:47:19.446303 2001 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:19.447005 kubelet[2001]: E0209 09:47:19.446972 2001 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:19.447005 kubelet[2001]: E0209 09:47:19.446999 2001 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:19.487109 kubelet[2001]: I0209 09:47:19.487079 2001 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 09:47:19.503306 kubelet[2001]: I0209 09:47:19.503271 2001 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:19.503306 kubelet[2001]: I0209 09:47:19.503290 2001 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:19.503306 kubelet[2001]: I0209 09:47:19.503305 2001 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:19.503426 kubelet[2001]: I0209 09:47:19.503420 2001 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:47:19.503458 kubelet[2001]: I0209 09:47:19.503431 2001 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:47:19.503458 kubelet[2001]: I0209 09:47:19.503438 2001 policy_none.go:49] "None policy: Start" Feb 9 09:47:19.503927 kubelet[2001]: I0209 09:47:19.503912 2001 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:19.503965 kubelet[2001]: I0209 09:47:19.503937 2001 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:19.504074 kubelet[2001]: I0209 09:47:19.504059 2001 state_mem.go:75] "Updated machine memory state" Feb 9 09:47:19.506976 kubelet[2001]: I0209 09:47:19.506958 2001 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:19.507274 kubelet[2001]: I0209 09:47:19.507257 2001 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:19.508500 kubelet[2001]: I0209 09:47:19.508465 2001 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:47:19.508500 kubelet[2001]: I0209 09:47:19.508485 2001 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:47:19.508500 kubelet[2001]: I0209 09:47:19.508498 2001 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:47:19.508616 kubelet[2001]: E0209 09:47:19.508535 2001 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:47:19.548812 kubelet[2001]: I0209 09:47:19.548789 2001 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:47:19.556001 kubelet[2001]: I0209 09:47:19.555943 2001 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 09:47:19.556103 kubelet[2001]: I0209 09:47:19.556092 2001 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:47:19.608745 kubelet[2001]: I0209 09:47:19.608654 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:19.608745 kubelet[2001]: I0209 09:47:19.608732 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:19.608839 kubelet[2001]: I0209 09:47:19.608763 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:19.613638 kubelet[2001]: E0209 09:47:19.613601 2001 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747418 kubelet[2001]: I0209 09:47:19.747380 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747418 kubelet[2001]: I0209 09:47:19.747423 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747541 kubelet[2001]: I0209 09:47:19.747449 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747541 kubelet[2001]: I0209 09:47:19.747469 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:47:19.747541 kubelet[2001]: I0209 09:47:19.747489 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:19.747541 kubelet[2001]: I0209 09:47:19.747509 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:19.747541 kubelet[2001]: I0209 09:47:19.747527 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747649 kubelet[2001]: I0209 09:47:19.747547 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:19.747649 kubelet[2001]: I0209 09:47:19.747568 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/badfc18b216be78deea79b356dfdbf9e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"badfc18b216be78deea79b356dfdbf9e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:19.843425 kubelet[2001]: E0209 09:47:19.843388 2001 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:19.914974 kubelet[2001]: E0209 09:47:19.914951 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:19.915075 kubelet[2001]: E0209 09:47:19.914995 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:20.144800 kubelet[2001]: E0209 09:47:20.144762 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:20.443668 kubelet[2001]: I0209 09:47:20.443614 
2001 apiserver.go:52] "Watching apiserver" Feb 9 09:47:20.446625 kubelet[2001]: I0209 09:47:20.446594 2001 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:47:20.451977 kubelet[2001]: I0209 09:47:20.451943 2001 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:47:20.643805 kubelet[2001]: E0209 09:47:20.643773 2001 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:47:20.644220 kubelet[2001]: E0209 09:47:20.644209 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:20.706651 sudo[1234]: pam_unix(sudo:session): session closed for user root Feb 9 09:47:20.709329 sshd[1231]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:20.711270 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:47:20.711476 systemd[1]: session-5.scope: Consumed 5.202s CPU time. Feb 9 09:47:20.711910 systemd-logind[1125]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:47:20.711983 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:47762.service: Deactivated successfully. Feb 9 09:47:20.712950 systemd-logind[1125]: Removed session 5. 
Feb 9 09:47:21.043172 kubelet[2001]: E0209 09:47:21.043075 2001 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:47:21.043644 kubelet[2001]: E0209 09:47:21.043624 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:21.244839 kubelet[2001]: E0209 09:47:21.244813 2001 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 09:47:21.245112 kubelet[2001]: E0209 09:47:21.245100 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:21.522932 kubelet[2001]: E0209 09:47:21.522717 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:21.522932 kubelet[2001]: E0209 09:47:21.522770 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:21.522932 kubelet[2001]: E0209 09:47:21.522829 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:21.843727 kubelet[2001]: I0209 09:47:21.843630 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.843583091 pod.CreationTimestamp="2024-02-09 09:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-09 09:47:21.843297877 +0000 UTC m=+2.471784247" watchObservedRunningTime="2024-02-09 09:47:21.843583091 +0000 UTC m=+2.472069461" Feb 9 09:47:21.843835 kubelet[2001]: I0209 09:47:21.843737 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.843721477 pod.CreationTimestamp="2024-02-09 09:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:21.453573354 +0000 UTC m=+2.082059804" watchObservedRunningTime="2024-02-09 09:47:21.843721477 +0000 UTC m=+2.472207807" Feb 9 09:47:22.243745 kubelet[2001]: I0209 09:47:22.243564 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.243529401 pod.CreationTimestamp="2024-02-09 09:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:22.24336913 +0000 UTC m=+2.871855460" watchObservedRunningTime="2024-02-09 09:47:22.243529401 +0000 UTC m=+2.872015771" Feb 9 09:47:22.525131 kubelet[2001]: E0209 09:47:22.524251 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:24.761957 kubelet[2001]: E0209 09:47:24.761924 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:25.527694 kubelet[2001]: E0209 09:47:25.527650 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:28.021180 kubelet[2001]: E0209 09:47:28.020678 2001 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:28.319361 kubelet[2001]: E0209 09:47:28.319261 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:28.531652 kubelet[2001]: E0209 09:47:28.531611 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:28.532460 kubelet[2001]: E0209 09:47:28.532437 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:29.533932 kubelet[2001]: E0209 09:47:29.533902 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:33.017769 kubelet[2001]: I0209 09:47:33.017581 2001 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:47:33.018179 env[1141]: time="2024-02-09T09:47:33.018007814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:47:33.018487 kubelet[2001]: I0209 09:47:33.018466 2001 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:47:33.961146 kubelet[2001]: I0209 09:47:33.961091 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:33.968356 kubelet[2001]: I0209 09:47:33.963264 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:33.984415 systemd[1]: Created slice kubepods-besteffort-pod7ab93e6a_d913_413b_83f0_7d6b508d0e0e.slice. 
Feb 9 09:47:33.993690 systemd[1]: Created slice kubepods-burstable-pod2135f6ed_024b_4d8a_9b75_093c12ff0d05.slice. Feb 9 09:47:34.007845 update_engine[1127]: I0209 09:47:34.007515 1127 update_attempter.cc:509] Updating boot flags... Feb 9 09:47:34.049058 kubelet[2001]: I0209 09:47:34.046975 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fz4\" (UniqueName: \"kubernetes.io/projected/7ab93e6a-d913-413b-83f0-7d6b508d0e0e-kube-api-access-j8fz4\") pod \"kube-proxy-jv5hp\" (UID: \"7ab93e6a-d913-413b-83f0-7d6b508d0e0e\") " pod="kube-system/kube-proxy-jv5hp" Feb 9 09:47:34.049058 kubelet[2001]: I0209 09:47:34.047156 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2135f6ed-024b-4d8a-9b75-093c12ff0d05-run\") pod \"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.049058 kubelet[2001]: I0209 09:47:34.047219 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/2135f6ed-024b-4d8a-9b75-093c12ff0d05-cni\") pod \"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.049058 kubelet[2001]: I0209 09:47:34.047243 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2135f6ed-024b-4d8a-9b75-093c12ff0d05-xtables-lock\") pod \"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.049058 kubelet[2001]: I0209 09:47:34.047282 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7ww\" (UniqueName: 
\"kubernetes.io/projected/2135f6ed-024b-4d8a-9b75-093c12ff0d05-kube-api-access-gv7ww\") pod \"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.050760 kubelet[2001]: I0209 09:47:34.047304 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ab93e6a-d913-413b-83f0-7d6b508d0e0e-xtables-lock\") pod \"kube-proxy-jv5hp\" (UID: \"7ab93e6a-d913-413b-83f0-7d6b508d0e0e\") " pod="kube-system/kube-proxy-jv5hp" Feb 9 09:47:34.050760 kubelet[2001]: I0209 09:47:34.047325 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ab93e6a-d913-413b-83f0-7d6b508d0e0e-kube-proxy\") pod \"kube-proxy-jv5hp\" (UID: \"7ab93e6a-d913-413b-83f0-7d6b508d0e0e\") " pod="kube-system/kube-proxy-jv5hp" Feb 9 09:47:34.050760 kubelet[2001]: I0209 09:47:34.047363 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ab93e6a-d913-413b-83f0-7d6b508d0e0e-lib-modules\") pod \"kube-proxy-jv5hp\" (UID: \"7ab93e6a-d913-413b-83f0-7d6b508d0e0e\") " pod="kube-system/kube-proxy-jv5hp" Feb 9 09:47:34.050760 kubelet[2001]: I0209 09:47:34.047383 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/2135f6ed-024b-4d8a-9b75-093c12ff0d05-cni-plugin\") pod \"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.050760 kubelet[2001]: I0209 09:47:34.047418 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/2135f6ed-024b-4d8a-9b75-093c12ff0d05-flannel-cfg\") pod 
\"kube-flannel-ds-6nffh\" (UID: \"2135f6ed-024b-4d8a-9b75-093c12ff0d05\") " pod="kube-flannel/kube-flannel-ds-6nffh" Feb 9 09:47:34.292589 kubelet[2001]: E0209 09:47:34.292495 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:34.294595 env[1141]: time="2024-02-09T09:47:34.294553382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv5hp,Uid:7ab93e6a-d913-413b-83f0-7d6b508d0e0e,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:34.295932 env[1141]: time="2024-02-09T09:47:34.295641270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6nffh,Uid:2135f6ed-024b-4d8a-9b75-093c12ff0d05,Namespace:kube-flannel,Attempt:0,}" Feb 9 09:47:34.295966 kubelet[2001]: E0209 09:47:34.295294 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:34.313470 env[1141]: time="2024-02-09T09:47:34.313401354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:34.313470 env[1141]: time="2024-02-09T09:47:34.313439883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:34.313837 env[1141]: time="2024-02-09T09:47:34.313450285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:34.314100 env[1141]: time="2024-02-09T09:47:34.314057904Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc pid=2116 runtime=io.containerd.runc.v2 Feb 9 09:47:34.316132 env[1141]: time="2024-02-09T09:47:34.315978861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:34.318121 env[1141]: time="2024-02-09T09:47:34.318066217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:34.318121 env[1141]: time="2024-02-09T09:47:34.318091382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:34.319737 env[1141]: time="2024-02-09T09:47:34.319010071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ff483fd2f750d0ffeb8c2943a67fe49bff0e664b0b208c4060bcaf9a8bd60c4 pid=2124 runtime=io.containerd.runc.v2 Feb 9 09:47:34.329267 systemd[1]: Started cri-containerd-5ff483fd2f750d0ffeb8c2943a67fe49bff0e664b0b208c4060bcaf9a8bd60c4.scope. Feb 9 09:47:34.331277 systemd[1]: Started cri-containerd-906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc.scope. 
Feb 9 09:47:34.378577 env[1141]: time="2024-02-09T09:47:34.378539188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv5hp,Uid:7ab93e6a-d913-413b-83f0-7d6b508d0e0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ff483fd2f750d0ffeb8c2943a67fe49bff0e664b0b208c4060bcaf9a8bd60c4\"" Feb 9 09:47:34.379529 kubelet[2001]: E0209 09:47:34.379354 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:34.382598 env[1141]: time="2024-02-09T09:47:34.382563265Z" level=info msg="CreateContainer within sandbox \"5ff483fd2f750d0ffeb8c2943a67fe49bff0e664b0b208c4060bcaf9a8bd60c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:47:34.390462 env[1141]: time="2024-02-09T09:47:34.390417973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6nffh,Uid:2135f6ed-024b-4d8a-9b75-093c12ff0d05,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\"" Feb 9 09:47:34.391093 kubelet[2001]: E0209 09:47:34.391069 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:34.392519 env[1141]: time="2024-02-09T09:47:34.392490525Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 9 09:47:34.400170 env[1141]: time="2024-02-09T09:47:34.400129105Z" level=info msg="CreateContainer within sandbox \"5ff483fd2f750d0ffeb8c2943a67fe49bff0e664b0b208c4060bcaf9a8bd60c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f616e22f0d1d4434cb05dff9923ff524ca01e5d1d30395d3868d96d69e99d652\"" Feb 9 09:47:34.400813 env[1141]: time="2024-02-09T09:47:34.400785134Z" level=info msg="StartContainer for 
\"f616e22f0d1d4434cb05dff9923ff524ca01e5d1d30395d3868d96d69e99d652\"" Feb 9 09:47:34.415143 systemd[1]: Started cri-containerd-f616e22f0d1d4434cb05dff9923ff524ca01e5d1d30395d3868d96d69e99d652.scope. Feb 9 09:47:34.452547 env[1141]: time="2024-02-09T09:47:34.452491549Z" level=info msg="StartContainer for \"f616e22f0d1d4434cb05dff9923ff524ca01e5d1d30395d3868d96d69e99d652\" returns successfully" Feb 9 09:47:34.541496 kubelet[2001]: E0209 09:47:34.541456 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:34.553684 kubelet[2001]: I0209 09:47:34.553570 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jv5hp" podStartSLOduration=1.553525838 pod.CreationTimestamp="2024-02-09 09:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:34.552601908 +0000 UTC m=+15.181088278" watchObservedRunningTime="2024-02-09 09:47:34.553525838 +0000 UTC m=+15.182012208" Feb 9 09:47:35.393724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349781856.mount: Deactivated successfully. 
Feb 9 09:47:35.430006 env[1141]: time="2024-02-09T09:47:35.429960560Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:35.431385 env[1141]: time="2024-02-09T09:47:35.431354302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:35.432538 env[1141]: time="2024-02-09T09:47:35.432503871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:35.433785 env[1141]: time="2024-02-09T09:47:35.433762543Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:35.434234 env[1141]: time="2024-02-09T09:47:35.434204079Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\"" Feb 9 09:47:35.437032 env[1141]: time="2024-02-09T09:47:35.436996483Z" level=info msg="CreateContainer within sandbox \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 09:47:35.444999 env[1141]: time="2024-02-09T09:47:35.444964089Z" level=info msg="CreateContainer within sandbox \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id 
\"fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb\"" Feb 9 09:47:35.446372 env[1141]: time="2024-02-09T09:47:35.446332825Z" level=info msg="StartContainer for \"fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb\"" Feb 9 09:47:35.459515 systemd[1]: Started cri-containerd-fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb.scope. Feb 9 09:47:35.497846 env[1141]: time="2024-02-09T09:47:35.497805851Z" level=info msg="StartContainer for \"fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb\" returns successfully" Feb 9 09:47:35.506764 systemd[1]: cri-containerd-fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb.scope: Deactivated successfully. Feb 9 09:47:35.548569 kubelet[2001]: E0209 09:47:35.548540 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:35.549251 kubelet[2001]: E0209 09:47:35.549063 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:35.593500 env[1141]: time="2024-02-09T09:47:35.593450482Z" level=info msg="shim disconnected" id=fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb Feb 9 09:47:35.593500 env[1141]: time="2024-02-09T09:47:35.593494371Z" level=warning msg="cleaning up after shim disconnected" id=fcb30f11297d8db3f7aaf5e43e89f927bc65d3c8cb4fe1ec545c8ba4075027eb namespace=k8s.io Feb 9 09:47:35.593500 env[1141]: time="2024-02-09T09:47:35.593503613Z" level=info msg="cleaning up dead shim" Feb 9 09:47:35.600243 env[1141]: time="2024-02-09T09:47:35.600187380Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2367 runtime=io.containerd.runc.v2\n" Feb 9 09:47:36.550552 kubelet[2001]: E0209 09:47:36.550525 2001 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:36.551542 env[1141]: time="2024-02-09T09:47:36.551507361Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 9 09:47:37.630564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682153806.mount: Deactivated successfully. Feb 9 09:47:38.218409 env[1141]: time="2024-02-09T09:47:38.218351571Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.219829 env[1141]: time="2024-02-09T09:47:38.219800362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.221594 env[1141]: time="2024-02-09T09:47:38.221562892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.223503 env[1141]: time="2024-02-09T09:47:38.223476370Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.224215 env[1141]: time="2024-02-09T09:47:38.224184422Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\"" Feb 9 09:47:38.227689 env[1141]: time="2024-02-09T09:47:38.227524166Z" level=info msg="CreateContainer within sandbox 
\"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:47:38.236581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892551295.mount: Deactivated successfully. Feb 9 09:47:38.240064 env[1141]: time="2024-02-09T09:47:38.240015462Z" level=info msg="CreateContainer within sandbox \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5\"" Feb 9 09:47:38.242258 env[1141]: time="2024-02-09T09:47:38.242227676Z" level=info msg="StartContainer for \"f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5\"" Feb 9 09:47:38.258590 systemd[1]: Started cri-containerd-f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5.scope. Feb 9 09:47:38.296244 systemd[1]: cri-containerd-f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5.scope: Deactivated successfully. 
Feb 9 09:47:38.298094 env[1141]: time="2024-02-09T09:47:38.297958736Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2135f6ed_024b_4d8a_9b75_093c12ff0d05.slice/cri-containerd-f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5.scope/memory.events\": no such file or directory" Feb 9 09:47:38.299452 env[1141]: time="2024-02-09T09:47:38.299418849Z" level=info msg="StartContainer for \"f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5\" returns successfully" Feb 9 09:47:38.332987 kubelet[2001]: I0209 09:47:38.330434 2001 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:47:38.348416 kubelet[2001]: I0209 09:47:38.347802 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:38.352305 kubelet[2001]: I0209 09:47:38.351330 2001 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:38.357684 systemd[1]: Created slice kubepods-burstable-pod934d6078_8779_42e4_aa43_7d59b7c5c2d8.slice. Feb 9 09:47:38.364075 systemd[1]: Created slice kubepods-burstable-poda383f256_df6e_4e0d_808f_2c05b146d680.slice. 
Feb 9 09:47:38.377079 kubelet[2001]: I0209 09:47:38.377009 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a383f256-df6e-4e0d-808f-2c05b146d680-config-volume\") pod \"coredns-787d4945fb-f47x8\" (UID: \"a383f256-df6e-4e0d-808f-2c05b146d680\") " pod="kube-system/coredns-787d4945fb-f47x8" Feb 9 09:47:38.377231 kubelet[2001]: I0209 09:47:38.377090 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqmk\" (UniqueName: \"kubernetes.io/projected/a383f256-df6e-4e0d-808f-2c05b146d680-kube-api-access-6wqmk\") pod \"coredns-787d4945fb-f47x8\" (UID: \"a383f256-df6e-4e0d-808f-2c05b146d680\") " pod="kube-system/coredns-787d4945fb-f47x8" Feb 9 09:47:38.377231 kubelet[2001]: I0209 09:47:38.377113 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/934d6078-8779-42e4-aa43-7d59b7c5c2d8-config-volume\") pod \"coredns-787d4945fb-q54xz\" (UID: \"934d6078-8779-42e4-aa43-7d59b7c5c2d8\") " pod="kube-system/coredns-787d4945fb-q54xz" Feb 9 09:47:38.377231 kubelet[2001]: I0209 09:47:38.377148 2001 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7r6f\" (UniqueName: \"kubernetes.io/projected/934d6078-8779-42e4-aa43-7d59b7c5c2d8-kube-api-access-j7r6f\") pod \"coredns-787d4945fb-q54xz\" (UID: \"934d6078-8779-42e4-aa43-7d59b7c5c2d8\") " pod="kube-system/coredns-787d4945fb-q54xz" Feb 9 09:47:38.389712 env[1141]: time="2024-02-09T09:47:38.389664443Z" level=info msg="shim disconnected" id=f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5 Feb 9 09:47:38.389851 env[1141]: time="2024-02-09T09:47:38.389715093Z" level=warning msg="cleaning up after shim disconnected" id=f582a3a5c0075b3d2e9a67e0b1ebd2d3a3620848d7089f31ec5bd13327e830b5 
namespace=k8s.io Feb 9 09:47:38.389851 env[1141]: time="2024-02-09T09:47:38.389727295Z" level=info msg="cleaning up dead shim" Feb 9 09:47:38.397391 env[1141]: time="2024-02-09T09:47:38.397336238Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Feb 9 09:47:38.558268 kubelet[2001]: E0209 09:47:38.555739 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:38.558629 env[1141]: time="2024-02-09T09:47:38.558560703Z" level=info msg="CreateContainer within sandbox \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 09:47:38.569333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113047163.mount: Deactivated successfully. Feb 9 09:47:38.572528 env[1141]: time="2024-02-09T09:47:38.572486707Z" level=info msg="CreateContainer within sandbox \"906ccd0fa30ad928d6b72f431f8b522ae07fbe585f4b7d6f21e9b42fff86bbcc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a47ae461de08181c67baf189182ee0c489e5db32d5a6c5fb38654570fa0ee6fd\"" Feb 9 09:47:38.574261 env[1141]: time="2024-02-09T09:47:38.574179063Z" level=info msg="StartContainer for \"a47ae461de08181c67baf189182ee0c489e5db32d5a6c5fb38654570fa0ee6fd\"" Feb 9 09:47:38.588442 systemd[1]: Started cri-containerd-a47ae461de08181c67baf189182ee0c489e5db32d5a6c5fb38654570fa0ee6fd.scope. 
Feb 9 09:47:38.628577 env[1141]: time="2024-02-09T09:47:38.628531666Z" level=info msg="StartContainer for \"a47ae461de08181c67baf189182ee0c489e5db32d5a6c5fb38654570fa0ee6fd\" returns successfully" Feb 9 09:47:38.661010 kubelet[2001]: E0209 09:47:38.660964 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:38.661489 env[1141]: time="2024-02-09T09:47:38.661419215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q54xz,Uid:934d6078-8779-42e4-aa43-7d59b7c5c2d8,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:38.666184 kubelet[2001]: E0209 09:47:38.666163 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:38.666762 env[1141]: time="2024-02-09T09:47:38.666720567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f47x8,Uid:a383f256-df6e-4e0d-808f-2c05b146d680,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:38.703760 env[1141]: time="2024-02-09T09:47:38.703678557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q54xz,Uid:934d6078-8779-42e4-aa43-7d59b7c5c2d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6aeab5abefbf734b48bfd69e3dd37db6c4b91bc0e57a4490ec239823c18afa2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 09:47:38.704088 kubelet[2001]: E0209 09:47:38.704056 2001 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6aeab5abefbf734b48bfd69e3dd37db6c4b91bc0e57a4490ec239823c18afa2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 09:47:38.704164 kubelet[2001]: E0209 
09:47:38.704121 2001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6aeab5abefbf734b48bfd69e3dd37db6c4b91bc0e57a4490ec239823c18afa2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-q54xz" Feb 9 09:47:38.704164 kubelet[2001]: E0209 09:47:38.704141 2001 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6aeab5abefbf734b48bfd69e3dd37db6c4b91bc0e57a4490ec239823c18afa2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-q54xz" Feb 9 09:47:38.704214 kubelet[2001]: E0209 09:47:38.704189 2001 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-q54xz_kube-system(934d6078-8779-42e4-aa43-7d59b7c5c2d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-q54xz_kube-system(934d6078-8779-42e4-aa43-7d59b7c5c2d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6aeab5abefbf734b48bfd69e3dd37db6c4b91bc0e57a4490ec239823c18afa2\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-q54xz" podUID=934d6078-8779-42e4-aa43-7d59b7c5c2d8 Feb 9 09:47:38.705633 env[1141]: time="2024-02-09T09:47:38.705591115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f47x8,Uid:a383f256-df6e-4e0d-808f-2c05b146d680,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b5376d3bd0c13bce36b125333a5363983a506bbae0f479270cbfc533a6fb269\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 09:47:38.706113 kubelet[2001]: E0209 09:47:38.706090 
2001 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b5376d3bd0c13bce36b125333a5363983a506bbae0f479270cbfc533a6fb269\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 09:47:38.706166 kubelet[2001]: E0209 09:47:38.706129 2001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b5376d3bd0c13bce36b125333a5363983a506bbae0f479270cbfc533a6fb269\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-f47x8" Feb 9 09:47:38.706311 kubelet[2001]: E0209 09:47:38.706294 2001 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b5376d3bd0c13bce36b125333a5363983a506bbae0f479270cbfc533a6fb269\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-f47x8" Feb 9 09:47:38.706677 kubelet[2001]: E0209 09:47:38.706659 2001 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-f47x8_kube-system(a383f256-df6e-4e0d-808f-2c05b146d680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-f47x8_kube-system(a383f256-df6e-4e0d-808f-2c05b146d680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b5376d3bd0c13bce36b125333a5363983a506bbae0f479270cbfc533a6fb269\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-f47x8" podUID=a383f256-df6e-4e0d-808f-2c05b146d680 Feb 9 09:47:39.559161 kubelet[2001]: E0209 09:47:39.559135 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:39.570669 kubelet[2001]: I0209 09:47:39.570617 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6nffh" podStartSLOduration=-9.223372030284197e+09 pod.CreationTimestamp="2024-02-09 09:47:33 +0000 UTC" firstStartedPulling="2024-02-09 09:47:34.391926677 +0000 UTC m=+15.020413047" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:39.568703389 +0000 UTC m=+20.197189799" watchObservedRunningTime="2024-02-09 09:47:39.570579163 +0000 UTC m=+20.199065533" Feb 9 09:47:40.118110 systemd-networkd[1040]: flannel.1: Link UP Feb 9 09:47:40.118116 systemd-networkd[1040]: flannel.1: Gained carrier Feb 9 09:47:40.560683 kubelet[2001]: E0209 09:47:40.560601 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:42.106204 systemd-networkd[1040]: flannel.1: Gained IPv6LL Feb 9 09:47:49.510156 kubelet[2001]: E0209 09:47:49.510124 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:49.510508 env[1141]: time="2024-02-09T09:47:49.510408135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q54xz,Uid:934d6078-8779-42e4-aa43-7d59b7c5c2d8,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:49.535203 systemd-networkd[1040]: cni0: Link UP Feb 9 09:47:49.535210 systemd-networkd[1040]: cni0: Gained carrier Feb 9 09:47:49.537107 systemd-networkd[1040]: cni0: Lost carrier Feb 9 09:47:49.540113 systemd-networkd[1040]: veth29951681: Link UP Feb 9 09:47:49.544099 kernel: cni0: port 1(veth29951681) entered blocking state Feb 9 09:47:49.544167 kernel: cni0: port 1(veth29951681) entered disabled state Feb 9 09:47:49.545278 kernel: 
device veth29951681 entered promiscuous mode Feb 9 09:47:49.545345 kernel: cni0: port 1(veth29951681) entered blocking state Feb 9 09:47:49.545371 kernel: cni0: port 1(veth29951681) entered forwarding state Feb 9 09:47:49.546575 kernel: cni0: port 1(veth29951681) entered disabled state Feb 9 09:47:49.562256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth29951681: link becomes ready Feb 9 09:47:49.562358 kernel: cni0: port 1(veth29951681) entered blocking state Feb 9 09:47:49.562387 kernel: cni0: port 1(veth29951681) entered forwarding state Feb 9 09:47:49.562062 systemd-networkd[1040]: veth29951681: Gained carrier Feb 9 09:47:49.562249 systemd-networkd[1040]: cni0: Gained carrier Feb 9 09:47:49.563762 env[1141]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400010c8c8), "name":"cbr0", "type":"bridge"} Feb 9 09:47:49.572703 env[1141]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:47:49.572648034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:49.572978 env[1141]: time="2024-02-09T09:47:49.572949150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:49.573049 env[1141]: time="2024-02-09T09:47:49.572989634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:49.573198 env[1141]: time="2024-02-09T09:47:49.573162414Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e pid=2680 runtime=io.containerd.runc.v2 Feb 9 09:47:49.591685 systemd[1]: Started cri-containerd-1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e.scope. Feb 9 09:47:49.610099 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:47:49.625795 env[1141]: time="2024-02-09T09:47:49.625734466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q54xz,Uid:934d6078-8779-42e4-aa43-7d59b7c5c2d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e\"" Feb 9 09:47:49.626461 kubelet[2001]: E0209 09:47:49.626437 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:49.629390 env[1141]: time="2024-02-09T09:47:49.629350808Z" level=info msg="CreateContainer within sandbox \"1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:49.637684 env[1141]: time="2024-02-09T09:47:49.637635454Z" level=info msg="CreateContainer within sandbox \"1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a4f0555a95a29ceac57d44eb3676eaf1f291c7cbbf304b1e0d99e23564e3460\"" Feb 9 09:47:49.639072 env[1141]: 
time="2024-02-09T09:47:49.638125552Z" level=info msg="StartContainer for \"2a4f0555a95a29ceac57d44eb3676eaf1f291c7cbbf304b1e0d99e23564e3460\"" Feb 9 09:47:49.652752 systemd[1]: Started cri-containerd-2a4f0555a95a29ceac57d44eb3676eaf1f291c7cbbf304b1e0d99e23564e3460.scope. Feb 9 09:47:49.711079 env[1141]: time="2024-02-09T09:47:49.711016013Z" level=info msg="StartContainer for \"2a4f0555a95a29ceac57d44eb3676eaf1f291c7cbbf304b1e0d99e23564e3460\" returns successfully" Feb 9 09:47:50.509621 kubelet[2001]: E0209 09:47:50.509500 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:50.510367 env[1141]: time="2024-02-09T09:47:50.509856204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f47x8,Uid:a383f256-df6e-4e0d-808f-2c05b146d680,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:50.517984 systemd[1]: run-containerd-runc-k8s.io-1db7ffaec4cd0c38ce148673af469402fb4c2fab35ede9cc8660d1ae178f370e-runc.tUv0xQ.mount: Deactivated successfully. 
Feb 9 09:47:50.528461 systemd-networkd[1040]: veth40f3dd88: Link UP Feb 9 09:47:50.532107 kernel: cni0: port 2(veth40f3dd88) entered blocking state Feb 9 09:47:50.532188 kernel: cni0: port 2(veth40f3dd88) entered disabled state Feb 9 09:47:50.533821 kernel: device veth40f3dd88 entered promiscuous mode Feb 9 09:47:50.533874 kernel: cni0: port 2(veth40f3dd88) entered blocking state Feb 9 09:47:50.533901 kernel: cni0: port 2(veth40f3dd88) entered forwarding state Feb 9 09:47:50.538092 kernel: cni0: port 2(veth40f3dd88) entered disabled state Feb 9 09:47:50.540211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:47:50.540273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth40f3dd88: link becomes ready Feb 9 09:47:50.540291 kernel: cni0: port 2(veth40f3dd88) entered blocking state Feb 9 09:47:50.541144 kernel: cni0: port 2(veth40f3dd88) entered forwarding state Feb 9 09:47:50.541322 systemd-networkd[1040]: veth40f3dd88: Gained carrier Feb 9 09:47:50.542784 env[1141]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014928), "name":"cbr0", "type":"bridge"} Feb 9 09:47:50.551210 env[1141]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:47:50.551139044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:50.551210 env[1141]: time="2024-02-09T09:47:50.551177248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:50.551210 env[1141]: time="2024-02-09T09:47:50.551194370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:50.551439 env[1141]: time="2024-02-09T09:47:50.551399953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d96d3f72ee2b0596387e7c20e3217d75474e2e2d65dcd8172f067e5d76556a2e pid=2790 runtime=io.containerd.runc.v2 Feb 9 09:47:50.562948 systemd[1]: Started cri-containerd-d96d3f72ee2b0596387e7c20e3217d75474e2e2d65dcd8172f067e5d76556a2e.scope. Feb 9 09:47:50.575325 kubelet[2001]: E0209 09:47:50.575292 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:50.593613 kubelet[2001]: I0209 09:47:50.593448 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-q54xz" podStartSLOduration=16.593407514 pod.CreationTimestamp="2024-02-09 09:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:50.58509438 +0000 UTC m=+31.213580750" watchObservedRunningTime="2024-02-09 09:47:50.593407514 +0000 UTC m=+31.221893844" Feb 9 09:47:50.594613 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:47:50.612738 env[1141]: time="2024-02-09T09:47:50.612694162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-f47x8,Uid:a383f256-df6e-4e0d-808f-2c05b146d680,Namespace:kube-system,Attempt:0,} returns sandbox id \"d96d3f72ee2b0596387e7c20e3217d75474e2e2d65dcd8172f067e5d76556a2e\"" Feb 9 09:47:50.613362 kubelet[2001]: E0209 09:47:50.613322 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:50.615275 env[1141]: time="2024-02-09T09:47:50.615244969Z" level=info msg="CreateContainer within sandbox \"d96d3f72ee2b0596387e7c20e3217d75474e2e2d65dcd8172f067e5d76556a2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:50.619184 systemd-networkd[1040]: veth29951681: Gained IPv6LL Feb 9 09:47:50.626018 env[1141]: time="2024-02-09T09:47:50.625978655Z" level=info msg="CreateContainer within sandbox \"d96d3f72ee2b0596387e7c20e3217d75474e2e2d65dcd8172f067e5d76556a2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63391431b47a3c4eb4a01d272b5831ba9ee6a8ec4f825f34a8455922348a43da\"" Feb 9 09:47:50.626518 env[1141]: time="2024-02-09T09:47:50.626476791Z" level=info msg="StartContainer for \"63391431b47a3c4eb4a01d272b5831ba9ee6a8ec4f825f34a8455922348a43da\"" Feb 9 09:47:50.640811 systemd[1]: Started cri-containerd-63391431b47a3c4eb4a01d272b5831ba9ee6a8ec4f825f34a8455922348a43da.scope. 
Feb 9 09:47:50.671246 env[1141]: time="2024-02-09T09:47:50.671203938Z" level=info msg="StartContainer for \"63391431b47a3c4eb4a01d272b5831ba9ee6a8ec4f825f34a8455922348a43da\" returns successfully" Feb 9 09:47:51.194172 systemd-networkd[1040]: cni0: Gained IPv6LL Feb 9 09:47:51.578419 kubelet[2001]: E0209 09:47:51.578157 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:51.578790 kubelet[2001]: E0209 09:47:51.578768 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:51.589510 kubelet[2001]: I0209 09:47:51.589448 2001 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-f47x8" podStartSLOduration=17.589417786 pod.CreationTimestamp="2024-02-09 09:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:51.589322135 +0000 UTC m=+32.217808505" watchObservedRunningTime="2024-02-09 09:47:51.589417786 +0000 UTC m=+32.217904156" Feb 9 09:47:52.410217 systemd-networkd[1040]: veth40f3dd88: Gained IPv6LL Feb 9 09:47:52.582809 kubelet[2001]: E0209 09:47:52.580253 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:52.582809 kubelet[2001]: E0209 09:47:52.580273 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:53.582322 kubelet[2001]: E0209 09:47:53.582287 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:48:01.389714 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:41130.service. Feb 9 09:48:01.430480 sshd[3023]: Accepted publickey for core from 10.0.0.1 port 41130 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:01.431748 sshd[3023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:01.435472 systemd-logind[1125]: New session 6 of user core. Feb 9 09:48:01.435906 systemd[1]: Started session-6.scope. Feb 9 09:48:01.561207 sshd[3023]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:01.563319 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:48:01.563865 systemd-logind[1125]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:48:01.564011 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:41130.service: Deactivated successfully. Feb 9 09:48:01.564972 systemd-logind[1125]: Removed session 6. Feb 9 09:48:06.570494 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:52700.service. Feb 9 09:48:06.612399 sshd[3059]: Accepted publickey for core from 10.0.0.1 port 52700 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:06.613681 sshd[3059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:06.617406 systemd-logind[1125]: New session 7 of user core. Feb 9 09:48:06.618863 systemd[1]: Started session-7.scope. Feb 9 09:48:06.747587 sshd[3059]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:06.750795 systemd-logind[1125]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:48:06.751349 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:52700.service: Deactivated successfully. Feb 9 09:48:06.752059 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:48:06.752686 systemd-logind[1125]: Removed session 7. Feb 9 09:48:11.752553 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:52702.service. 
Feb 9 09:48:11.796718 sshd[3092]: Accepted publickey for core from 10.0.0.1 port 52702 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:11.797798 sshd[3092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:11.802901 systemd[1]: Started session-8.scope. Feb 9 09:48:11.803355 systemd-logind[1125]: New session 8 of user core. Feb 9 09:48:11.914331 sshd[3092]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:11.918371 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:52710.service. Feb 9 09:48:11.918900 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:52702.service: Deactivated successfully. Feb 9 09:48:11.919531 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:48:11.922547 systemd-logind[1125]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:48:11.923535 systemd-logind[1125]: Removed session 8. Feb 9 09:48:11.963709 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 52710 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:11.964887 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:11.968417 systemd-logind[1125]: New session 9 of user core. Feb 9 09:48:11.969490 systemd[1]: Started session-9.scope. Feb 9 09:48:12.181226 sshd[3105]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:12.184317 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:52720.service. Feb 9 09:48:12.194443 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:52710.service: Deactivated successfully. Feb 9 09:48:12.196373 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:48:12.197482 systemd-logind[1125]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:48:12.198174 systemd-logind[1125]: Removed session 9. 
Feb 9 09:48:12.230412 sshd[3116]: Accepted publickey for core from 10.0.0.1 port 52720 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:12.232419 sshd[3116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:12.236086 systemd-logind[1125]: New session 10 of user core. Feb 9 09:48:12.236828 systemd[1]: Started session-10.scope. Feb 9 09:48:12.342773 sshd[3116]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:12.345167 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:52720.service: Deactivated successfully. Feb 9 09:48:12.345960 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:48:12.346499 systemd-logind[1125]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:48:12.347076 systemd-logind[1125]: Removed session 10. Feb 9 09:48:17.348174 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:34920.service. Feb 9 09:48:17.388599 sshd[3149]: Accepted publickey for core from 10.0.0.1 port 34920 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:48:17.390240 sshd[3149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:17.393373 systemd-logind[1125]: New session 11 of user core. Feb 9 09:48:17.394291 systemd[1]: Started session-11.scope. Feb 9 09:48:17.511902 sshd[3149]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:17.513141 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:34932.service. Feb 9 09:48:17.515510 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:34920.service: Deactivated successfully. Feb 9 09:48:17.516443 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:48:17.517080 systemd-logind[1125]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:48:17.517835 systemd-logind[1125]: Removed session 11. 
Feb 9 09:48:17.555143 sshd[3161]: Accepted publickey for core from 10.0.0.1 port 34932 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:17.556541 sshd[3161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:17.559850 systemd-logind[1125]: New session 12 of user core.
Feb 9 09:48:17.560948 systemd[1]: Started session-12.scope.
Feb 9 09:48:17.795187 sshd[3161]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:17.799016 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:34934.service.
Feb 9 09:48:17.799530 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:34932.service: Deactivated successfully.
Feb 9 09:48:17.800340 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 09:48:17.800911 systemd-logind[1125]: Session 12 logged out. Waiting for processes to exit.
Feb 9 09:48:17.801853 systemd-logind[1125]: Removed session 12.
Feb 9 09:48:17.840363 sshd[3172]: Accepted publickey for core from 10.0.0.1 port 34934 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:17.841647 sshd[3172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:17.844512 systemd-logind[1125]: New session 13 of user core.
Feb 9 09:48:17.845329 systemd[1]: Started session-13.scope.
Feb 9 09:48:18.526553 sshd[3172]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:18.529240 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:34934.service: Deactivated successfully.
Feb 9 09:48:18.529916 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 09:48:18.530603 systemd-logind[1125]: Session 13 logged out. Waiting for processes to exit.
Feb 9 09:48:18.531683 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:34948.service.
Feb 9 09:48:18.532844 systemd-logind[1125]: Removed session 13.
Feb 9 09:48:18.580069 sshd[3200]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:18.581339 sshd[3200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:18.584821 systemd-logind[1125]: New session 14 of user core.
Feb 9 09:48:18.585696 systemd[1]: Started session-14.scope.
Feb 9 09:48:18.759290 sshd[3200]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:18.762121 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:34948.service: Deactivated successfully.
Feb 9 09:48:18.762824 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 09:48:18.763505 systemd-logind[1125]: Session 14 logged out. Waiting for processes to exit.
Feb 9 09:48:18.764497 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:34954.service.
Feb 9 09:48:18.765534 systemd-logind[1125]: Removed session 14.
Feb 9 09:48:18.805373 sshd[3254]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:18.806961 sshd[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:18.810653 systemd-logind[1125]: New session 15 of user core.
Feb 9 09:48:18.811104 systemd[1]: Started session-15.scope.
Feb 9 09:48:18.914442 sshd[3254]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:18.917045 systemd-logind[1125]: Session 15 logged out. Waiting for processes to exit.
Feb 9 09:48:18.917227 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 09:48:18.918136 systemd-logind[1125]: Removed session 15.
Feb 9 09:48:18.918375 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:34954.service: Deactivated successfully.
Feb 9 09:48:23.919776 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:34718.service.
Feb 9 09:48:23.960192 sshd[3314]: Accepted publickey for core from 10.0.0.1 port 34718 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:23.962360 sshd[3314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:23.965398 systemd-logind[1125]: New session 16 of user core.
Feb 9 09:48:23.966206 systemd[1]: Started session-16.scope.
Feb 9 09:48:24.073151 sshd[3314]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:24.076263 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:34718.service: Deactivated successfully.
Feb 9 09:48:24.077059 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 09:48:24.077859 systemd-logind[1125]: Session 16 logged out. Waiting for processes to exit.
Feb 9 09:48:24.079149 systemd-logind[1125]: Removed session 16.
Feb 9 09:48:29.076630 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:34730.service.
Feb 9 09:48:29.117088 sshd[3345]: Accepted publickey for core from 10.0.0.1 port 34730 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:29.118140 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:29.121567 systemd-logind[1125]: New session 17 of user core.
Feb 9 09:48:29.121994 systemd[1]: Started session-17.scope.
Feb 9 09:48:29.224307 sshd[3345]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:29.226592 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:34730.service: Deactivated successfully.
Feb 9 09:48:29.227439 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 09:48:29.227941 systemd-logind[1125]: Session 17 logged out. Waiting for processes to exit.
Feb 9 09:48:29.228645 systemd-logind[1125]: Removed session 17.
Feb 9 09:48:34.228610 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:36212.service.
Feb 9 09:48:34.269246 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 36212 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:34.270722 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:34.273951 systemd-logind[1125]: New session 18 of user core.
Feb 9 09:48:34.274852 systemd[1]: Started session-18.scope.
Feb 9 09:48:34.378909 sshd[3376]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:34.381617 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:36212.service: Deactivated successfully.
Feb 9 09:48:34.382484 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 09:48:34.383055 systemd-logind[1125]: Session 18 logged out. Waiting for processes to exit.
Feb 9 09:48:34.383653 systemd-logind[1125]: Removed session 18.
Feb 9 09:48:38.510193 kubelet[2001]: E0209 09:48:38.510164 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:48:39.383766 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:36218.service.
Feb 9 09:48:39.425996 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 36218 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:48:39.427166 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:39.430447 systemd-logind[1125]: New session 19 of user core.
Feb 9 09:48:39.431292 systemd[1]: Started session-19.scope.
Feb 9 09:48:39.509966 kubelet[2001]: E0209 09:48:39.509912 2001 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:48:39.540402 sshd[3409]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:39.542842 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:36218.service: Deactivated successfully.
Feb 9 09:48:39.543567 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 09:48:39.544071 systemd-logind[1125]: Session 19 logged out. Waiting for processes to exit.
Feb 9 09:48:39.544770 systemd-logind[1125]: Removed session 19.