Feb 12 19:09:21.736654 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 12 19:09:21.736674 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024 Feb 12 19:09:21.736682 kernel: efi: EFI v2.70 by EDK II Feb 12 19:09:21.736687 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 12 19:09:21.736692 kernel: random: crng init done Feb 12 19:09:21.736698 kernel: ACPI: Early table checksum verification disabled Feb 12 19:09:21.736704 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 12 19:09:21.736711 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 12 19:09:21.736717 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736722 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736728 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736734 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736739 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736745 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736752 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736758 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736764 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:09:21.736770 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 12 19:09:21.736776 kernel: NUMA: Failed to initialise from firmware Feb 12 19:09:21.736782 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:09:21.736788 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Feb 12 19:09:21.736793 kernel: Zone ranges: Feb 12 19:09:21.736799 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:09:21.736806 kernel: DMA32 empty Feb 12 19:09:21.736811 kernel: Normal empty Feb 12 19:09:21.736817 kernel: Movable zone start for each node Feb 12 19:09:21.736822 kernel: Early memory node ranges Feb 12 19:09:21.736828 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 12 19:09:21.736834 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 12 19:09:21.736839 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 12 19:09:21.736845 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 12 19:09:21.736851 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 12 19:09:21.736856 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 12 19:09:21.736862 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 12 19:09:21.736867 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:09:21.736874 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 12 19:09:21.736880 kernel: psci: probing for conduit method from ACPI. Feb 12 19:09:21.736885 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 12 19:09:21.736891 kernel: psci: Using standard PSCI v0.2 function IDs Feb 12 19:09:21.736897 kernel: psci: Trusted OS migration not required Feb 12 19:09:21.736905 kernel: psci: SMC Calling Convention v1.1 Feb 12 19:09:21.736911 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 12 19:09:21.736919 kernel: ACPI: SRAT not present Feb 12 19:09:21.736925 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 12 19:09:21.736931 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 12 19:09:21.736937 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 12 19:09:21.736943 kernel: Detected PIPT I-cache on CPU0 Feb 12 19:09:21.736949 kernel: CPU features: detected: GIC system register CPU interface Feb 12 19:09:21.736955 kernel: CPU features: detected: Hardware dirty bit management Feb 12 19:09:21.736961 kernel: CPU features: detected: Spectre-v4 Feb 12 19:09:21.736967 kernel: CPU features: detected: Spectre-BHB Feb 12 19:09:21.736974 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 12 19:09:21.736980 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 12 19:09:21.736986 kernel: CPU features: detected: ARM erratum 1418040 Feb 12 19:09:21.736992 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 12 19:09:21.736998 kernel: Policy zone: DMA Feb 12 19:09:21.737005 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40 Feb 12 19:09:21.737011 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:09:21.737017 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 19:09:21.737024 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:09:21.737030 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:09:21.737036 kernel: Memory: 2459144K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113144K reserved, 0K cma-reserved) Feb 12 19:09:21.737043 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 19:09:21.737049 kernel: trace event string verifier disabled Feb 12 19:09:21.737055 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 12 19:09:21.737062 kernel: rcu: RCU event tracing is enabled. Feb 12 19:09:21.737068 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 19:09:21.737074 kernel: Trampoline variant of Tasks RCU enabled. Feb 12 19:09:21.737080 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:09:21.737086 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 12 19:09:21.737092 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 19:09:21.737098 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 12 19:09:21.737104 kernel: GICv3: 256 SPIs implemented Feb 12 19:09:21.737111 kernel: GICv3: 0 Extended SPIs implemented Feb 12 19:09:21.737125 kernel: GICv3: Distributor has no Range Selector support Feb 12 19:09:21.737132 kernel: Root IRQ handler: gic_handle_irq Feb 12 19:09:21.737137 kernel: GICv3: 16 PPIs implemented Feb 12 19:09:21.737143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 12 19:09:21.737149 kernel: ACPI: SRAT not present Feb 12 19:09:21.737155 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 12 19:09:21.737161 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 12 19:09:21.737168 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 12 19:09:21.737173 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 12 19:09:21.737179 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 12 19:09:21.737185 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:09:21.737193 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 12 19:09:21.737199 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 12 19:09:21.737205 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 12 19:09:21.737219 kernel: arm-pv: using stolen time PV Feb 12 19:09:21.737226 kernel: Console: colour dummy device 80x25 Feb 12 19:09:21.737232 kernel: ACPI: Core revision 20210730 Feb 12 19:09:21.737238 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 12 19:09:21.737245 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:09:21.737251 kernel: LSM: Security Framework initializing Feb 12 19:09:21.737257 kernel: SELinux: Initializing. Feb 12 19:09:21.737265 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:09:21.737271 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:09:21.737277 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:09:21.737283 kernel: Platform MSI: ITS@0x8080000 domain created Feb 12 19:09:21.737289 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 12 19:09:21.737296 kernel: Remapping and enabling EFI services. Feb 12 19:09:21.737302 kernel: smp: Bringing up secondary CPUs ... 
Feb 12 19:09:21.737308 kernel: Detected PIPT I-cache on CPU1 Feb 12 19:09:21.737314 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 12 19:09:21.737322 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 12 19:09:21.737329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:09:21.737335 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 12 19:09:21.737341 kernel: Detected PIPT I-cache on CPU2 Feb 12 19:09:21.737347 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 12 19:09:21.737354 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 12 19:09:21.737360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:09:21.737366 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 12 19:09:21.737372 kernel: Detected PIPT I-cache on CPU3 Feb 12 19:09:21.737378 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 12 19:09:21.737386 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 12 19:09:21.737392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:09:21.737398 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 12 19:09:21.737404 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 19:09:21.737415 kernel: SMP: Total of 4 processors activated. Feb 12 19:09:21.737423 kernel: CPU features: detected: 32-bit EL0 Support Feb 12 19:09:21.737430 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 12 19:09:21.737436 kernel: CPU features: detected: Common not Private translations Feb 12 19:09:21.737443 kernel: CPU features: detected: CRC32 instructions Feb 12 19:09:21.737449 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 12 19:09:21.737455 kernel: CPU features: detected: LSE atomic instructions Feb 12 19:09:21.737462 kernel: CPU features: detected: Privileged Access Never Feb 12 19:09:21.737470 kernel: CPU features: detected: RAS Extension Support Feb 12 19:09:21.737476 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 12 19:09:21.737483 kernel: CPU: All CPU(s) started at EL1 Feb 12 19:09:21.737489 kernel: alternatives: patching kernel code Feb 12 19:09:21.737497 kernel: devtmpfs: initialized Feb 12 19:09:21.737503 kernel: KASLR enabled Feb 12 19:09:21.737510 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:09:21.737517 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 19:09:21.737524 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:09:21.737530 kernel: SMBIOS 3.0.0 present. 
Feb 12 19:09:21.737536 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 12 19:09:21.737543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:09:21.737550 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 12 19:09:21.737556 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 12 19:09:21.737564 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 12 19:09:21.737571 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:09:21.737577 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Feb 12 19:09:21.737584 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:09:21.737590 kernel: cpuidle: using governor menu Feb 12 19:09:21.737597 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 12 19:09:21.737603 kernel: ASID allocator initialised with 32768 entries Feb 12 19:09:21.737610 kernel: ACPI: bus type PCI registered Feb 12 19:09:21.737617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:09:21.737625 kernel: Serial: AMBA PL011 UART driver Feb 12 19:09:21.737631 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 19:09:21.737638 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 12 19:09:21.737644 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:09:21.737651 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 12 19:09:21.737657 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:09:21.737664 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 12 19:09:21.737671 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:09:21.737677 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:09:21.737685 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:09:21.737691 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:09:21.737698 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:09:21.737704 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:09:21.737711 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:09:21.737717 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:09:21.737724 kernel: ACPI: Interpreter enabled Feb 12 19:09:21.737730 kernel: ACPI: Using GIC for interrupt routing Feb 12 19:09:21.737737 kernel: ACPI: MCFG table detected, 1 entries Feb 12 19:09:21.737744 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 12 19:09:21.737751 kernel: printk: console [ttyAMA0] enabled Feb 12 19:09:21.737758 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:09:21.737908 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:09:21.737976 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 12 19:09:21.738038 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 12 19:09:21.738099 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 12 19:09:21.738170 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 12 19:09:21.738180 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 12 19:09:21.738187 kernel: PCI host bridge to bus 0000:00 Feb 12 19:09:21.738276 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 12 19:09:21.738335 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] Feb 12 19:09:21.738391 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 12 19:09:21.738445 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:09:21.738522 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 12 19:09:21.738595 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 19:09:21.738658 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 12 19:09:21.738721 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 12 19:09:21.738783 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 12 19:09:21.738846 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 12 19:09:21.738908 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 12 19:09:21.738972 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 12 19:09:21.739028 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 12 19:09:21.739081 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 12 19:09:21.739143 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 12 19:09:21.739152 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 12 19:09:21.739159 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 12 19:09:21.739166 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 12 19:09:21.739174 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 12 19:09:21.739181 kernel: iommu: Default domain type: Translated Feb 12 19:09:21.739188 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 12 19:09:21.739194 kernel: vgaarb: loaded Feb 12 19:09:21.739201 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 19:09:21.739208 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:09:21.739221 kernel: PTP clock support registered Feb 12 19:09:21.739228 kernel: Registered efivars operations Feb 12 19:09:21.739235 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 12 19:09:21.739243 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:09:21.739250 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:09:21.739257 kernel: pnp: PnP ACPI init Feb 12 19:09:21.739327 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 12 19:09:21.739336 kernel: pnp: PnP ACPI: found 1 devices Feb 12 19:09:21.739343 kernel: NET: Registered PF_INET protocol family Feb 12 19:09:21.739350 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 19:09:21.739357 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 19:09:21.739364 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:09:21.739373 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:09:21.739379 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 19:09:21.739386 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 19:09:21.739395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:09:21.739402 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:09:21.739409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:09:21.739415 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:09:21.739422 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 12 19:09:21.739430 kernel: kvm [1]: HYP mode not available Feb 12 19:09:21.739437 kernel: Initialise system trusted keyrings Feb 12 19:09:21.739443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 19:09:21.739451 kernel: Key type asymmetric registered Feb 12 19:09:21.739457 kernel: Asymmetric key parser 'x509' registered Feb 12 19:09:21.739464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:09:21.739470 kernel: io scheduler mq-deadline registered Feb 12 19:09:21.739477 kernel: io scheduler kyber registered Feb 12 19:09:21.739484 kernel: io scheduler bfq registered Feb 12 19:09:21.739490 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 12 19:09:21.739498 kernel: ACPI: button: Power Button [PWRB] Feb 12 19:09:21.739505 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 12 19:09:21.739568 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 12 19:09:21.739577 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:09:21.739584 kernel: thunder_xcv, ver 1.0 Feb 12 19:09:21.739591 kernel: thunder_bgx, ver 1.0 Feb 12 19:09:21.739597 kernel: nicpf, ver 1.0 Feb 12 19:09:21.739604 kernel: nicvf, ver 1.0 Feb 12 19:09:21.739680 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 12 19:09:21.739740 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:09:21 UTC (1707764961) Feb 12 19:09:21.739749 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 12 19:09:21.739756 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:09:21.739762 kernel: Segment Routing with IPv6 Feb 12 19:09:21.739769 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:09:21.739775 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:09:21.739782 kernel: Key type 
dns_resolver registered Feb 12 19:09:21.739789 kernel: registered taskstats version 1 Feb 12 19:09:21.739798 kernel: Loading compiled-in X.509 certificates Feb 12 19:09:21.739805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c' Feb 12 19:09:21.739811 kernel: Key type .fscrypt registered Feb 12 19:09:21.739817 kernel: Key type fscrypt-provisioning registered Feb 12 19:09:21.739824 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 19:09:21.739831 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:09:21.739837 kernel: ima: No architecture policies found Feb 12 19:09:21.739844 kernel: Freeing unused kernel memory: 34688K Feb 12 19:09:21.739852 kernel: Run /init as init process Feb 12 19:09:21.739859 kernel: with arguments: Feb 12 19:09:21.739865 kernel: /init Feb 12 19:09:21.739871 kernel: with environment: Feb 12 19:09:21.739878 kernel: HOME=/ Feb 12 19:09:21.739884 kernel: TERM=linux Feb 12 19:09:21.739890 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:09:21.739899 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:09:21.739907 systemd[1]: Detected virtualization kvm. Feb 12 19:09:21.739916 systemd[1]: Detected architecture arm64. Feb 12 19:09:21.739923 systemd[1]: Running in initrd. Feb 12 19:09:21.739930 systemd[1]: No hostname configured, using default hostname. Feb 12 19:09:21.739936 systemd[1]: Hostname set to <localhost>. Feb 12 19:09:21.739944 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:09:21.739951 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:09:21.739957 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:09:21.739964 systemd[1]: Reached target cryptsetup.target. Feb 12 19:09:21.739973 systemd[1]: Reached target paths.target. Feb 12 19:09:21.739980 systemd[1]: Reached target slices.target. Feb 12 19:09:21.739987 systemd[1]: Reached target swap.target. Feb 12 19:09:21.739994 systemd[1]: Reached target timers.target. Feb 12 19:09:21.740001 systemd[1]: Listening on iscsid.socket. Feb 12 19:09:21.740008 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:09:21.740015 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:09:21.740023 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:09:21.740030 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:09:21.740037 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:09:21.740044 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:09:21.740051 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:09:21.740058 systemd[1]: Reached target sockets.target. Feb 12 19:09:21.740065 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:09:21.740072 systemd[1]: Finished network-cleanup.service. Feb 12 19:09:21.740079 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:09:21.740087 systemd[1]: Starting systemd-journald.service... Feb 12 19:09:21.740094 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:09:21.740101 systemd[1]: Starting systemd-resolved.service... Feb 12 19:09:21.740108 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:09:21.740121 systemd[1]: Finished kmod-static-nodes.service. 
Feb 12 19:09:21.740129 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 19:09:21.740136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:09:21.740146 systemd-journald[290]: Journal started Feb 12 19:09:21.740185 systemd-journald[290]: Runtime Journal (/run/log/journal/2e618bd1c704458c8e1f76c5b9f59c3e) is 6.0M, max 48.7M, 42.6M free. Feb 12 19:09:21.736314 systemd-modules-load[291]: Inserted module 'overlay' Feb 12 19:09:21.743903 systemd[1]: Started systemd-journald.service. Feb 12 19:09:21.744059 kernel: audit: type=1130 audit(1707764961.741:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.742508 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:09:21.747018 kernel: audit: type=1130 audit(1707764961.744:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.744904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:09:21.750249 kernel: audit: type=1130 audit(1707764961.747:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.748703 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:09:21.761090 systemd-resolved[292]: Positive Trust Anchors: Feb 12 19:09:21.762756 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:09:21.761108 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:09:21.761144 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:09:21.767227 systemd-resolved[292]: Defaulting to hostname 'linux'. Feb 12 19:09:21.767835 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:09:21.772067 kernel: Bridge firewalling registered Feb 12 19:09:21.772087 kernel: audit: type=1130 audit(1707764961.769:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:09:21.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.769139 systemd-modules-load[291]: Inserted module 'br_netfilter' Feb 12 19:09:21.774953 kernel: audit: type=1130 audit(1707764961.772:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.769775 systemd[1]: Started systemd-resolved.service. Feb 12 19:09:21.772714 systemd[1]: Reached target nss-lookup.target. Feb 12 19:09:21.776413 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:09:21.784572 kernel: SCSI subsystem initialized Feb 12 19:09:21.786446 dracut-cmdline[308]: dracut-dracut-053 Feb 12 19:09:21.789130 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40 Feb 12 19:09:21.797821 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:09:21.797867 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:09:21.797884 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:09:21.803833 systemd-modules-load[291]: Inserted module 'dm_multipath' Feb 12 19:09:21.804667 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:09:21.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.806157 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:09:21.808778 kernel: audit: type=1130 audit(1707764961.805:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.815150 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:09:21.818252 kernel: audit: type=1130 audit(1707764961.815:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.861239 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:09:21.869236 kernel: iscsi: registered transport (tcp) Feb 12 19:09:21.882246 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:09:21.882288 kernel: QLogic iSCSI HBA Driver Feb 12 19:09:21.915819 systemd[1]: Finished dracut-cmdline.service. 
Feb 12 19:09:21.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.917388 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:09:21.919748 kernel: audit: type=1130 audit(1707764961.916:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:21.964242 kernel: raid6: neonx8 gen() 13754 MB/s Feb 12 19:09:21.981242 kernel: raid6: neonx8 xor() 10820 MB/s Feb 12 19:09:21.998242 kernel: raid6: neonx4 gen() 13576 MB/s Feb 12 19:09:22.015233 kernel: raid6: neonx4 xor() 11321 MB/s Feb 12 19:09:22.032246 kernel: raid6: neonx2 gen() 12949 MB/s Feb 12 19:09:22.049245 kernel: raid6: neonx2 xor() 10235 MB/s Feb 12 19:09:22.066232 kernel: raid6: neonx1 gen() 10497 MB/s Feb 12 19:09:22.083266 kernel: raid6: neonx1 xor() 8794 MB/s Feb 12 19:09:22.100245 kernel: raid6: int64x8 gen() 6281 MB/s Feb 12 19:09:22.117254 kernel: raid6: int64x8 xor() 3547 MB/s Feb 12 19:09:22.134238 kernel: raid6: int64x4 gen() 7214 MB/s Feb 12 19:09:22.151247 kernel: raid6: int64x4 xor() 3855 MB/s Feb 12 19:09:22.168239 kernel: raid6: int64x2 gen() 6150 MB/s Feb 12 19:09:22.185248 kernel: raid6: int64x2 xor() 3322 MB/s Feb 12 19:09:22.202244 kernel: raid6: int64x1 gen() 5043 MB/s Feb 12 19:09:22.219432 kernel: raid6: int64x1 xor() 2646 MB/s Feb 12 19:09:22.219466 kernel: raid6: using algorithm neonx8 gen() 13754 MB/s Feb 12 19:09:22.219476 kernel: raid6: .... xor() 10820 MB/s, rmw enabled Feb 12 19:09:22.219484 kernel: raid6: using neon recovery algorithm Feb 12 19:09:22.230380 kernel: xor: measuring software checksum speed Feb 12 19:09:22.230416 kernel: 8regs : 17300 MB/sec Feb 12 19:09:22.231233 kernel: 32regs : 20755 MB/sec Feb 12 19:09:22.232365 kernel: arm64_neon : 27920 MB/sec Feb 12 19:09:22.232392 kernel: xor: using function: arm64_neon (27920 MB/sec) Feb 12 19:09:22.296245 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 12 19:09:22.306948 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:09:22.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:22.308686 systemd[1]: Starting systemd-udevd.service... Feb 12 19:09:22.311280 kernel: audit: type=1130 audit(1707764962.307:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:22.307000 audit: BPF prog-id=7 op=LOAD Feb 12 19:09:22.308000 audit: BPF prog-id=8 op=LOAD Feb 12 19:09:22.325145 systemd-udevd[492]: Using default interface naming scheme 'v252'. Feb 12 19:09:22.328556 systemd[1]: Started systemd-udevd.service. Feb 12 19:09:22.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:22.330522 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:09:22.341961 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Feb 12 19:09:22.371679 systemd[1]: Finished dracut-pre-trigger.service. 
Feb 12 19:09:22.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:22.373296 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:09:22.406529 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:09:22.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:22.441201 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 19:09:22.443450 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:09:22.443489 kernel: GPT:9289727 != 19775487 Feb 12 19:09:22.443499 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:09:22.444442 kernel: GPT:9289727 != 19775487 Feb 12 19:09:22.444465 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:09:22.446252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:09:22.457241 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (549) Feb 12 19:09:22.459461 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:09:22.464362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:09:22.465158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:09:22.471235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:09:22.474607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:09:22.476261 systemd[1]: Starting disk-uuid.service... Feb 12 19:09:22.487259 disk-uuid[563]: Primary Header is updated. Feb 12 19:09:22.487259 disk-uuid[563]: Secondary Entries is updated. Feb 12 19:09:22.487259 disk-uuid[563]: Secondary Header is updated. Feb 12 19:09:22.490244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:09:23.508005 disk-uuid[564]: The operation has completed successfully. Feb 12 19:09:23.509414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:09:23.531254 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:09:23.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.531348 systemd[1]: Finished disk-uuid.service. Feb 12 19:09:23.533015 systemd[1]: Starting verity-setup.service... Feb 12 19:09:23.559229 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:09:23.581258 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:09:23.583660 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:09:23.585284 systemd[1]: Finished verity-setup.service. Feb 12 19:09:23.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.634231 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:09:23.634404 systemd[1]: Mounted sysusr-usr.mount. 
Feb 12 19:09:23.635236 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:09:23.635928 systemd[1]: Starting ignition-setup.service... Feb 12 19:09:23.637787 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:09:23.645303 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:09:23.645341 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:09:23.645361 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:09:23.654463 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:09:23.660680 systemd[1]: Finished ignition-setup.service. Feb 12 19:09:23.662161 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:09:23.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.730677 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:09:23.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.734000 audit: BPF prog-id=9 op=LOAD Feb 12 19:09:23.734770 systemd[1]: Starting systemd-networkd.service... Feb 12 19:09:23.742248 ignition[648]: Ignition 2.14.0 Feb 12 19:09:23.742258 ignition[648]: Stage: fetch-offline Feb 12 19:09:23.742295 ignition[648]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:23.742304 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:23.742427 ignition[648]: parsed url from cmdline: "" Feb 12 19:09:23.742430 ignition[648]: no config URL provided Feb 12 19:09:23.742435 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:09:23.742442 ignition[648]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:09:23.742460 ignition[648]: op(1): [started] loading QEMU firmware config module Feb 12 19:09:23.742465 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:09:23.748278 ignition[648]: op(1): [finished] loading QEMU firmware config module Feb 12 19:09:23.769069 systemd-networkd[739]: lo: Link UP Feb 12 19:09:23.769085 systemd-networkd[739]: lo: Gained carrier Feb 12 19:09:23.769484 systemd-networkd[739]: Enumeration completed Feb 12 19:09:23.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.769565 systemd[1]: Started systemd-networkd.service. Feb 12 19:09:23.769667 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:09:23.770746 systemd[1]: Reached target network.target. Feb 12 19:09:23.771375 systemd-networkd[739]: eth0: Link UP Feb 12 19:09:23.771398 systemd-networkd[739]: eth0: Gained carrier Feb 12 19:09:23.772453 systemd[1]: Starting iscsiuio.service... Feb 12 19:09:23.781255 systemd[1]: Started iscsiuio.service. Feb 12 19:09:23.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.782741 systemd[1]: Starting iscsid.service... 
Feb 12 19:09:23.786280 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:09:23.786280 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:09:23.786280 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:09:23.786280 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:09:23.786280 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:09:23.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.795308 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:09:23.789665 systemd[1]: Started iscsid.service. Feb 12 19:09:23.793032 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:09:23.797630 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:09:23.804858 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:09:23.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.805667 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:09:23.806706 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:09:23.807799 systemd[1]: Reached target remote-fs.target. Feb 12 19:09:23.809773 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:09:23.817067 ignition[648]: parsing config with SHA512: 6118cf72739ab3aedaf2326798d3e50af12aae977c2af3556bec07e121e256070df55abaab76f0eb8bc02039ef19619f3dcf56b46570f12cfab80c657612599f Feb 12 19:09:23.817769 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:09:23.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.851833 unknown[648]: fetched base config from "system" Feb 12 19:09:23.851846 unknown[648]: fetched user config from "qemu" Feb 12 19:09:23.852392 ignition[648]: fetch-offline: fetch-offline passed Feb 12 19:09:23.853367 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:09:23.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.852451 ignition[648]: Ignition finished successfully Feb 12 19:09:23.854436 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:09:23.855156 systemd[1]: Starting ignition-kargs.service... 
Feb 12 19:09:23.864074 ignition[760]: Ignition 2.14.0 Feb 12 19:09:23.864084 ignition[760]: Stage: kargs Feb 12 19:09:23.864182 ignition[760]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:23.864191 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:23.865268 ignition[760]: kargs: kargs passed Feb 12 19:09:23.866616 systemd[1]: Finished ignition-kargs.service. Feb 12 19:09:23.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.865311 ignition[760]: Ignition finished successfully Feb 12 19:09:23.868143 systemd[1]: Starting ignition-disks.service... Feb 12 19:09:23.874907 ignition[766]: Ignition 2.14.0 Feb 12 19:09:23.874917 ignition[766]: Stage: disks Feb 12 19:09:23.875070 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:23.875082 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:23.877429 systemd[1]: Finished ignition-disks.service. Feb 12 19:09:23.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.876183 ignition[766]: disks: disks passed Feb 12 19:09:23.878640 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:09:23.876260 ignition[766]: Ignition finished successfully Feb 12 19:09:23.879651 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:09:23.880532 systemd[1]: Reached target local-fs.target. Feb 12 19:09:23.881512 systemd[1]: Reached target sysinit.target. Feb 12 19:09:23.882455 systemd[1]: Reached target basic.target. Feb 12 19:09:23.884190 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:09:23.894887 systemd-fsck[774]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:09:23.898520 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:09:23.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.900854 systemd[1]: Mounting sysroot.mount... Feb 12 19:09:23.907237 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:09:23.907749 systemd[1]: Mounted sysroot.mount. Feb 12 19:09:23.908451 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:09:23.910355 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:09:23.911168 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:09:23.911204 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:09:23.911256 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:09:23.913119 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:09:23.915374 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:09:23.919625 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:09:23.923781 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:09:23.927386 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:09:23.931126 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:09:23.956301 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:09:23.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.957778 systemd[1]: Starting ignition-mount.service... Feb 12 19:09:23.958916 systemd[1]: Starting sysroot-boot.service... Feb 12 19:09:23.964066 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:09:23.972895 ignition[827]: INFO : Ignition 2.14.0 Feb 12 19:09:23.972895 ignition[827]: INFO : Stage: mount Feb 12 19:09:23.974190 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:23.974190 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:23.974190 ignition[827]: INFO : mount: mount passed Feb 12 19:09:23.974190 ignition[827]: INFO : Ignition finished successfully Feb 12 19:09:23.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.974759 systemd[1]: Finished ignition-mount.service. Feb 12 19:09:23.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:23.977707 systemd[1]: Finished sysroot-boot.service. Feb 12 19:09:24.592962 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:09:24.598227 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 12 19:09:24.600326 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:09:24.600340 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:09:24.600349 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:09:24.602956 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:09:24.604514 systemd[1]: Starting ignition-files.service... 
Feb 12 19:09:24.617743 ignition[856]: INFO : Ignition 2.14.0 Feb 12 19:09:24.617743 ignition[856]: INFO : Stage: files Feb 12 19:09:24.619010 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:24.619010 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:24.619010 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:09:24.624967 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:09:24.624967 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:09:24.627424 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:09:24.627424 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:09:24.629401 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:09:24.629401 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:09:24.629401 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 12 19:09:24.627785 unknown[856]: wrote ssh authorized keys file for user: core Feb 12 19:09:24.949143 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:09:25.143363 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 19:09:25.143363 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:09:25.146912 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:09:25.146912 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:09:25.359855 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:09:25.476636 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 19:09:25.476636 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:09:25.476636 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:09:25.481496 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 12 19:09:25.676460 systemd-networkd[739]: eth0: Gained IPv6LL Feb 12 19:09:25.727348 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:09:25.769841 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:09:25.773021 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:09:25.773021 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:09:25.817065 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:09:26.234072 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 12 19:09:26.234072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:09:26.237514 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:09:26.237514 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:09:26.258540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:09:27.082547 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 12 19:09:27.082547 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:09:27.082547 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:09:27.082547 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 12 19:09:27.108350 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:09:27.381157 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:09:27.383235 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:09:27.396859 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:09:27.396859 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:09:27.396859 ignition[856]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Feb 12 19:09:27.419741 ignition[856]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:09:27.419741 ignition[856]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:09:27.419741 ignition[856]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:09:27.419741 ignition[856]: INFO : 
files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:09:27.419741 ignition[856]: INFO : files: op(19): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 19:09:27.419741 ignition[856]: INFO : files: op(19): op(1a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:09:27.460664 ignition[856]: INFO : files: op(19): op(1a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:09:27.462857 ignition[856]: INFO : files: op(19): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 19:09:27.462857 ignition[856]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:09:27.462857 ignition[856]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:09:27.462857 ignition[856]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:09:27.462857 ignition[856]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:09:27.462857 ignition[856]: INFO : files: files passed Feb 12 19:09:27.462857 ignition[856]: INFO : Ignition finished successfully Feb 12 19:09:27.480848 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:09:27.480869 kernel: audit: type=1130 audit(1707764967.464:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.480881 kernel: audit: type=1130 audit(1707764967.472:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.480891 kernel: audit: type=1131 audit(1707764967.472:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.480901 kernel: audit: type=1130 audit(1707764967.478:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.462988 systemd[1]: Finished ignition-files.service. Feb 12 19:09:27.465566 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
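
The Ignition file writes above each pair a GET with a sha512 comparison ("file matches expected sum of: ..."). The following Python sketch reproduces that check outside of Ignition for the cni-plugins artifact named in the log; it is an illustrative stand-in, not Ignition's own (Go) implementation, and the URL and expected digest are copied verbatim from the op(3) entries above.

# Minimal sketch: re-check one of the sha512 digests Ignition logged above.
# Not Ignition's implementation; URL and digest are taken verbatim from the
# "createFiles: op(3)" entries in this log.
import hashlib
import urllib.request

URL = ("https://github.com/containernetworking/plugins/releases/download/"
       "v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz")
EXPECTED_SHA512 = (
    "6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c"
    "8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742"
)

def sha512_of_url(url: str, chunk_size: int = 1 << 20) -> str:
    """Stream the download and hash it without holding the whole file in memory."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha512_of_url(URL)
    # Mirrors the log's "file matches expected sum of: ..." verification.
    print("OK" if actual == EXPECTED_SHA512 else "MISMATCH: " + actual)
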
Feb 12 19:09:27.482789 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:09:27.466532 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:09:27.485691 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:09:27.467208 systemd[1]: Starting ignition-quench.service... Feb 12 19:09:27.471437 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:09:27.471528 systemd[1]: Finished ignition-quench.service. Feb 12 19:09:27.475377 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:09:27.478522 systemd[1]: Reached target ignition-complete.target. Feb 12 19:09:27.482182 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:09:27.495245 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:09:27.495362 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:09:27.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.496763 systemd[1]: Reached target initrd-fs.target. Feb 12 19:09:27.501869 kernel: audit: type=1130 audit(1707764967.496:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.501891 kernel: audit: type=1131 audit(1707764967.496:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.501468 systemd[1]: Reached target initrd.target. Feb 12 19:09:27.502603 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:09:27.503551 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:09:27.514200 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:09:27.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.515849 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:09:27.518403 kernel: audit: type=1130 audit(1707764967.514:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.524692 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:09:27.525580 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:09:27.526814 systemd[1]: Stopped target timers.target. Feb 12 19:09:27.527967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:09:27.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.528084 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 12 19:09:27.533252 kernel: audit: type=1131 audit(1707764967.528:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.529253 systemd[1]: Stopped target initrd.target. Feb 12 19:09:27.532878 systemd[1]: Stopped target basic.target. Feb 12 19:09:27.534021 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:09:27.535305 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:09:27.536498 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:09:27.537688 systemd[1]: Stopped target remote-fs.target. Feb 12 19:09:27.538724 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:09:27.539813 systemd[1]: Stopped target sysinit.target. Feb 12 19:09:27.540827 systemd[1]: Stopped target local-fs.target. Feb 12 19:09:27.541944 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:09:27.543009 systemd[1]: Stopped target swap.target. Feb 12 19:09:27.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.547230 kernel: audit: type=1131 audit(1707764967.544:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.544025 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:09:27.544199 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:09:27.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.545310 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:09:27.552356 kernel: audit: type=1131 audit(1707764967.548:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.547910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:09:27.548010 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:09:27.549047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:09:27.549153 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:09:27.551980 systemd[1]: Stopped target paths.target. Feb 12 19:09:27.552987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:09:27.554247 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:09:27.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.555602 systemd[1]: Stopped target slices.target. Feb 12 19:09:27.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.556638 systemd[1]: Stopped target sockets.target. 
Feb 12 19:09:27.557621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:09:27.557728 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:09:27.564271 iscsid[745]: iscsid shutting down. Feb 12 19:09:27.559057 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:09:27.559157 systemd[1]: Stopped ignition-files.service. Feb 12 19:09:27.561155 systemd[1]: Stopping ignition-mount.service... Feb 12 19:09:27.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.562574 systemd[1]: Stopping iscsid.service... Feb 12 19:09:27.566115 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:09:27.569471 ignition[896]: INFO : Ignition 2.14.0 Feb 12 19:09:27.569471 ignition[896]: INFO : Stage: umount Feb 12 19:09:27.569471 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:09:27.569471 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:09:27.569471 ignition[896]: INFO : umount: umount passed Feb 12 19:09:27.569471 ignition[896]: INFO : Ignition finished successfully Feb 12 19:09:27.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.566254 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:09:27.567877 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:09:27.568716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:09:27.568836 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:09:27.570068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:09:27.570164 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:09:27.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.572478 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:09:27.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.572570 systemd[1]: Stopped iscsid.service. Feb 12 19:09:27.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.573722 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 12 19:09:27.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.573787 systemd[1]: Closed iscsid.socket. Feb 12 19:09:27.575017 systemd[1]: Stopping iscsiuio.service... Feb 12 19:09:27.579265 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:09:27.579720 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:09:27.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.579802 systemd[1]: Stopped iscsiuio.service. Feb 12 19:09:27.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.581737 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:09:27.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.581810 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:09:27.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.582780 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:09:27.582850 systemd[1]: Stopped ignition-mount.service. Feb 12 19:09:27.583921 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:09:27.583994 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:09:27.585499 systemd[1]: Stopped target network.target. Feb 12 19:09:27.586396 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:09:27.586434 systemd[1]: Closed iscsiuio.socket. Feb 12 19:09:27.587357 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:09:27.587399 systemd[1]: Stopped ignition-disks.service. Feb 12 19:09:27.588509 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:09:27.588547 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:09:27.589595 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:09:27.589631 systemd[1]: Stopped ignition-setup.service. Feb 12 19:09:27.590697 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:09:27.590735 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:09:27.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.592133 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:09:27.593467 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:09:27.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.601805 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:09:27.601891 systemd[1]: Stopped systemd-resolved.service. 
Feb 12 19:09:27.603280 systemd-networkd[739]: eth0: DHCPv6 lease lost Feb 12 19:09:27.604336 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:09:27.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.609000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:09:27.604417 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:09:27.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.605638 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:09:27.605664 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:09:27.612000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:09:27.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.607061 systemd[1]: Stopping network-cleanup.service... Feb 12 19:09:27.608014 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:09:27.608064 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:09:27.609166 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:09:27.609201 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:09:27.610882 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:09:27.610918 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:09:27.612557 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:09:27.617274 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:09:27.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.619932 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:09:27.620010 systemd[1]: Stopped network-cleanup.service. Feb 12 19:09:27.624047 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:09:27.624164 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:09:27.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.625569 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:09:27.625602 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:09:27.626501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:09:27.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.626534 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:09:27.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.627622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 12 19:09:27.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.627657 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:09:27.628768 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:09:27.628804 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:09:27.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.629932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:09:27.629968 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:09:27.631708 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:09:27.632834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:09:27.632880 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:09:27.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.636814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:09:27.636890 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:09:27.638208 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:09:27.639861 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:09:27.645803 systemd[1]: Switching root. Feb 12 19:09:27.656398 systemd-journald[290]: Journal stopped Feb 12 19:09:29.744112 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 12 19:09:29.744176 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:09:29.744192 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:09:29.744203 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:09:29.744253 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:09:29.744289 kernel: SELinux: policy capability open_perms=1 Feb 12 19:09:29.744298 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:09:29.744307 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:09:29.744316 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:09:29.744326 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:09:29.744338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:09:29.744347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:09:29.744357 systemd[1]: Successfully loaded SELinux policy in 36.365ms. Feb 12 19:09:29.744381 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.832ms. Feb 12 19:09:29.744393 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:09:29.744404 systemd[1]: Detected virtualization kvm. 
Feb 12 19:09:29.744414 systemd[1]: Detected architecture arm64. Feb 12 19:09:29.744425 systemd[1]: Detected first boot. Feb 12 19:09:29.744435 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:09:29.744446 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:09:29.744456 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:09:29.744470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:09:29.744482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:09:29.744493 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:09:29.744505 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:09:29.744517 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:09:29.744528 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:09:29.744539 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:09:29.744549 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:09:29.744560 systemd[1]: Created slice system-getty.slice. Feb 12 19:09:29.744570 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:09:29.744582 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:09:29.744593 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:09:29.744604 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:09:29.744614 systemd[1]: Created slice user.slice. Feb 12 19:09:29.744626 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:09:29.744636 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:09:29.744646 systemd[1]: Set up automount boot.automount. Feb 12 19:09:29.744660 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:09:29.744670 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:09:29.744681 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:09:29.744692 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:09:29.744703 systemd[1]: Reached target integritysetup.target. Feb 12 19:09:29.744713 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:09:29.744723 systemd[1]: Reached target remote-fs.target. Feb 12 19:09:29.744733 systemd[1]: Reached target slices.target. Feb 12 19:09:29.744744 systemd[1]: Reached target swap.target. Feb 12 19:09:29.744755 systemd[1]: Reached target torcx.target. Feb 12 19:09:29.744765 systemd[1]: Reached target veritysetup.target. Feb 12 19:09:29.744775 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:09:29.744786 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:09:29.744797 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:09:29.744808 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:09:29.744819 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:09:29.744829 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:09:29.744839 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:09:29.744850 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:09:29.744860 systemd[1]: Mounting media.mount... 
Feb 12 19:09:29.744870 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:09:29.744881 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:09:29.744892 systemd[1]: Mounting tmp.mount... Feb 12 19:09:29.744903 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:09:29.744913 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:09:29.744924 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:09:29.744934 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:09:29.744945 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:09:29.744955 systemd[1]: Starting modprobe@drm.service... Feb 12 19:09:29.744966 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:09:29.744976 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:09:29.744987 systemd[1]: Starting modprobe@loop.service... Feb 12 19:09:29.744998 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:09:29.745008 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:09:29.745018 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:09:29.745030 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:09:29.745040 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:09:29.745051 systemd[1]: Stopped systemd-journald.service. Feb 12 19:09:29.745060 kernel: loop: module loaded Feb 12 19:09:29.745072 systemd[1]: Starting systemd-journald.service... Feb 12 19:09:29.745083 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:09:29.745099 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:09:29.745110 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:09:29.745120 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:09:29.745130 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:09:29.745140 systemd[1]: Stopped verity-setup.service. Feb 12 19:09:29.745151 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:09:29.745160 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:09:29.745171 systemd[1]: Mounted media.mount. Feb 12 19:09:29.745183 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:09:29.745193 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:09:29.745203 kernel: fuse: init (API version 7.34) Feb 12 19:09:29.745220 systemd[1]: Mounted tmp.mount. Feb 12 19:09:29.745232 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:09:29.745242 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:09:29.745255 systemd-journald[995]: Journal started Feb 12 19:09:29.745295 systemd-journald[995]: Runtime Journal (/run/log/journal/2e618bd1c704458c8e1f76c5b9f59c3e) is 6.0M, max 48.7M, 42.6M free. 
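
With systemd-journald now writing to /run/log/journal/2e618bd1c704458c8e1f76c5b9f59c3e, the entries interleaved in this dump can also be pulled back out per unit instead of being read from the console. A minimal sketch, assuming the optional python-systemd bindings are installed on the host; the unit name is taken from the ignition-files.service entries earlier in this log.

# Hedged sketch: query the journal shown above for one unit's messages.
# Assumes the optional python-systemd bindings are available; the unit name
# comes from the ignition-files.service entries earlier in this log.
from systemd import journal

reader = journal.Reader()
reader.this_boot()                                   # restrict to the current boot
reader.add_match(_SYSTEMD_UNIT="ignition-files.service")

for entry in reader:
    # Each entry is a dict of journal fields; MESSAGE holds the log text.
    print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
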
Feb 12 19:09:27.720000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:09:27.885000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:09:27.885000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:09:27.885000 audit: BPF prog-id=10 op=LOAD Feb 12 19:09:27.885000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:09:27.885000 audit: BPF prog-id=11 op=LOAD Feb 12 19:09:27.885000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:09:27.941000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:09:27.941000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8ac a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:09:27.941000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:09:27.942000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:09:27.942000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd985 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:09:27.942000 audit: CWD cwd="/" Feb 12 19:09:27.942000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:09:27.942000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:09:27.942000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:09:29.621000 audit: BPF prog-id=12 op=LOAD Feb 12 19:09:29.621000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:09:29.621000 audit: BPF prog-id=13 op=LOAD Feb 12 19:09:29.622000 audit: BPF prog-id=14 op=LOAD Feb 12 19:09:29.622000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:09:29.622000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:09:29.623000 audit: BPF prog-id=15 op=LOAD Feb 12 19:09:29.623000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:09:29.623000 
audit: BPF prog-id=16 op=LOAD Feb 12 19:09:29.623000 audit: BPF prog-id=17 op=LOAD Feb 12 19:09:29.623000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:09:29.623000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:09:29.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.633000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:09:29.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.712000 audit: BPF prog-id=18 op=LOAD Feb 12 19:09:29.712000 audit: BPF prog-id=19 op=LOAD Feb 12 19:09:29.713000 audit: BPF prog-id=20 op=LOAD Feb 12 19:09:29.713000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:09:29.713000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:09:29.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:09:29.742000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:09:29.742000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffe11f3b0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:09:29.742000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:09:27.939671 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:09:29.620822 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:09:29.746469 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:09:27.940170 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:09:29.620834 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:09:29.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.940188 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:09:29.623750 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:09:27.940231 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:09:27.940253 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:09:27.940283 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:09:27.940294 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:09:27.940494 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:09:29.749167 systemd[1]: Started systemd-journald.service. Feb 12 19:09:29.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:09:29.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:27.940529 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:09:29.748356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:09:27.940540 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:09:29.748504 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:09:27.940964 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:09:29.749484 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:09:27.940996 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:09:29.749632 systemd[1]: Finished modprobe@drm.service. Feb 12 19:09:27.941014 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:09:27.941028 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:09:27.941044 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:09:29.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:09:27.941057 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:09:29.379103 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:09:29.379386 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:09:29.379485 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:09:29.379648 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:09:29.379696 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:09:29.379753 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-12T19:09:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:09:29.750831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:09:29.750953 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:09:29.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.752032 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:09:29.752795 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:09:29.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.753738 systemd[1]: modprobe@loop.service: Deactivated successfully. 
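
The audit SYSCALL records for pid 929 above carry the torcx-generator command line as a hex-encoded PROCTITLE field with NUL-separated arguments. The small sketch below decodes it; the hex string is copied verbatim from the log, and the audit subsystem truncates long proctitles, which is why the last path comes out cut short.

# Sketch: decode the hex PROCTITLE field from the audit records above.
# The value is the process command line with NUL bytes between arguments;
# this one (pid 929, torcx-generator) is copied verbatim from the log.
PROCTITLE_HEX = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F72"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F722E6561726C79"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F722E6C61"
)

args = bytes.fromhex(PROCTITLE_HEX).split(b"\x00")
print([a.decode() for a in args])
# Output (final element truncated in the audit record itself):
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']
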
Feb 12 19:09:29.753903 systemd[1]: Finished modprobe@loop.service. Feb 12 19:09:29.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.755034 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:09:29.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.756080 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:09:29.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.757244 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:09:29.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.758433 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:09:29.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.759800 systemd[1]: Reached target network-pre.target. Feb 12 19:09:29.762156 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:09:29.764026 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:09:29.764662 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:09:29.766434 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:09:29.768282 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:09:29.768900 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:09:29.769997 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:09:29.770773 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:09:29.771950 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:09:29.774759 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:09:29.777967 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:09:29.778918 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:09:29.779886 systemd-journald[995]: Time spent on flushing to /var/log/journal/2e618bd1c704458c8e1f76c5b9f59c3e is 12.911ms for 1026 entries. Feb 12 19:09:29.779886 systemd-journald[995]: System Journal (/var/log/journal/2e618bd1c704458c8e1f76c5b9f59c3e) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:09:29.809173 systemd-journald[995]: Received client request to flush runtime journal. Feb 12 19:09:29.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:09:29.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.789790 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:09:29.809711 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:09:29.791865 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:09:29.792748 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:09:29.793651 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:09:29.805648 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:09:29.810349 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:09:29.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:29.811481 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:09:29.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.147078 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:09:30.148000 audit: BPF prog-id=21 op=LOAD Feb 12 19:09:30.148000 audit: BPF prog-id=22 op=LOAD Feb 12 19:09:30.148000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:09:30.148000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:09:30.149146 systemd[1]: Starting systemd-udevd.service... Feb 12 19:09:30.168254 systemd-udevd[1032]: Using default interface naming scheme 'v252'. Feb 12 19:09:30.182484 systemd[1]: Started systemd-udevd.service. Feb 12 19:09:30.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.183000 audit: BPF prog-id=23 op=LOAD Feb 12 19:09:30.184679 systemd[1]: Starting systemd-networkd.service... Feb 12 19:09:30.208000 audit: BPF prog-id=24 op=LOAD Feb 12 19:09:30.208000 audit: BPF prog-id=25 op=LOAD Feb 12 19:09:30.208000 audit: BPF prog-id=26 op=LOAD Feb 12 19:09:30.209772 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:09:30.223163 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 12 19:09:30.238375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:09:30.262683 systemd[1]: Started systemd-userdbd.service. Feb 12 19:09:30.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:09:30.301697 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:09:30.303766 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:09:30.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.329020 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:09:30.329821 systemd-networkd[1040]: lo: Link UP Feb 12 19:09:30.329830 systemd-networkd[1040]: lo: Gained carrier Feb 12 19:09:30.330180 systemd-networkd[1040]: Enumeration completed Feb 12 19:09:30.330300 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:09:30.330325 systemd[1]: Started systemd-networkd.service. Feb 12 19:09:30.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.338092 systemd-networkd[1040]: eth0: Link UP Feb 12 19:09:30.338103 systemd-networkd[1040]: eth0: Gained carrier Feb 12 19:09:30.361237 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:09:30.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.362004 systemd[1]: Reached target cryptsetup.target. Feb 12 19:09:30.363769 systemd[1]: Starting lvm2-activation.service... Feb 12 19:09:30.364351 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:09:30.367683 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:09:30.395200 systemd[1]: Finished lvm2-activation.service. Feb 12 19:09:30.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.395954 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:09:30.396628 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:09:30.396653 systemd[1]: Reached target local-fs.target. Feb 12 19:09:30.397236 systemd[1]: Reached target machines.target. Feb 12 19:09:30.398977 systemd[1]: Starting ldconfig.service... Feb 12 19:09:30.399919 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:09:30.399976 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:09:30.401091 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:09:30.402859 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:09:30.404758 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:09:30.406305 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:09:30.406374 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. 
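The lines above show systemd-networkd matching eth0 against /usr/lib/systemd/network/zz-default.network and acquiring 10.0.0.19/16 with gateway 10.0.0.1 over DHCP. As an illustrative sketch only (the unit actually shipped on this image may differ), a .network file that produces that behaviour looks like the following; the zz- prefix makes it a lexical-order fallback that applies only when no earlier unit matched the link:

    # zz-default.network -- sketch, not the exact Flatcar unit
    [Match]
    # match wired interfaces such as eth0 (placeholder pattern)
    Name=eth*

    [Network]
    # request an address lease, as seen in the DHCPv4 line above
    DHCP=yes
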
Feb 12 19:09:30.407425 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:09:30.409054 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) Feb 12 19:09:30.412227 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:09:30.417897 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:09:30.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.427751 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:09:30.429772 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:09:30.432358 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:09:30.502006 systemd-fsck[1076]: fsck.fat 4.2 (2021-01-31) Feb 12 19:09:30.502006 systemd-fsck[1076]: /dev/vda1: 236 files, 113719/258078 clusters Feb 12 19:09:30.503496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:09:30.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.506379 systemd[1]: Mounting boot.mount... Feb 12 19:09:30.538380 systemd[1]: Mounted boot.mount. Feb 12 19:09:30.552468 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:09:30.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.553822 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:09:30.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.607496 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:09:30.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.609872 systemd[1]: Starting audit-rules.service... Feb 12 19:09:30.611552 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:09:30.613414 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:09:30.615000 audit: BPF prog-id=27 op=LOAD Feb 12 19:09:30.616490 systemd[1]: Starting systemd-resolved.service... Feb 12 19:09:30.619000 audit: BPF prog-id=28 op=LOAD Feb 12 19:09:30.620248 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:09:30.623476 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:09:30.632306 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:09:30.635185 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 12 19:09:30.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.636000 audit[1092]: SYSTEM_BOOT pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.639390 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:09:30.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.640796 systemd[1]: Finished ldconfig.service. Feb 12 19:09:30.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.642893 systemd[1]: Starting systemd-update-done.service... Feb 12 19:09:30.644964 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:09:30.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.646092 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:09:30.651006 systemd[1]: Finished systemd-update-done.service. Feb 12 19:09:30.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:09:30.666457 augenrules[1102]: No rules Feb 12 19:09:30.666000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:09:30.666000 audit[1102]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce0413f0 a2=420 a3=0 items=0 ppid=1080 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:09:30.666000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:09:30.667970 systemd[1]: Finished audit-rules.service. Feb 12 19:09:30.680940 systemd-resolved[1086]: Positive Trust Anchors: Feb 12 19:09:30.680952 systemd-resolved[1086]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:09:30.680980 systemd-resolved[1086]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:09:30.693441 systemd[1]: Started systemd-timesyncd.service. 
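In the audit-rules block above, augenrules reports "No rules" and the accompanying audit record captures auditctl loading the (empty) rule file; the PROCTITLE field is the command line, hex-encoded with NUL separators. Assuming xxd is available, it can be decoded like this:

    # Decode the PROCTITLE hex from the audit record above.
    echo '2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573' \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules
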
Feb 12 19:09:30.694178 systemd[1]: Reached target time-set.target. Feb 12 19:09:30.695171 systemd-timesyncd[1090]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:09:30.695242 systemd-timesyncd[1090]: Initial clock synchronization to Mon 2024-02-12 19:09:30.726225 UTC. Feb 12 19:09:30.702644 systemd-resolved[1086]: Defaulting to hostname 'linux'. Feb 12 19:09:30.703961 systemd[1]: Started systemd-resolved.service. Feb 12 19:09:30.704670 systemd[1]: Reached target network.target. Feb 12 19:09:30.705246 systemd[1]: Reached target nss-lookup.target. Feb 12 19:09:30.705814 systemd[1]: Reached target sysinit.target. Feb 12 19:09:30.706519 systemd[1]: Started motdgen.path. Feb 12 19:09:30.707053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:09:30.708035 systemd[1]: Started logrotate.timer. Feb 12 19:09:30.708813 systemd[1]: Started mdadm.timer. Feb 12 19:09:30.709518 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:09:30.710328 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:09:30.710371 systemd[1]: Reached target paths.target. Feb 12 19:09:30.711056 systemd[1]: Reached target timers.target. Feb 12 19:09:30.712048 systemd[1]: Listening on dbus.socket. Feb 12 19:09:30.713825 systemd[1]: Starting docker.socket... Feb 12 19:09:30.717131 systemd[1]: Listening on sshd.socket. Feb 12 19:09:30.717965 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:09:30.718535 systemd[1]: Listening on docker.socket. Feb 12 19:09:30.719384 systemd[1]: Reached target sockets.target. Feb 12 19:09:30.720072 systemd[1]: Reached target basic.target. Feb 12 19:09:30.720784 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:09:30.720813 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:09:30.721875 systemd[1]: Starting containerd.service... Feb 12 19:09:30.723836 systemd[1]: Starting dbus.service... Feb 12 19:09:30.725968 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:09:30.728416 systemd[1]: Starting extend-filesystems.service... Feb 12 19:09:30.729465 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:09:30.730950 systemd[1]: Starting motdgen.service... Feb 12 19:09:30.733538 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:09:30.735639 systemd[1]: Starting prepare-critools.service... Feb 12 19:09:30.736062 jq[1112]: false Feb 12 19:09:30.738459 systemd[1]: Starting prepare-helm.service... Feb 12 19:09:30.740923 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:09:30.742837 systemd[1]: Starting sshd-keygen.service... Feb 12 19:09:30.745949 systemd[1]: Starting systemd-logind.service... Feb 12 19:09:30.746943 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:09:30.747075 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
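docker.socket appears here among the listening sockets, so dockerd itself is only started on first use (socket activation); a later line in this log shows systemd rewriting the unit's legacy /var/run/docker.sock path to /run/docker.sock. A socket unit of roughly this shape, shown as a sketch rather than the exact Flatcar unit:

    # docker.socket -- sketch; values on the real image may differ
    [Unit]
    Description=Docker Socket for the API

    [Socket]
    # /run rather than the legacy /var/run, per the warning later in this log
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker

    [Install]
    WantedBy=sockets.target
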
Feb 12 19:09:30.747684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:09:30.749185 systemd[1]: Starting update-engine.service... Feb 12 19:09:30.751745 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:09:30.754718 jq[1132]: true Feb 12 19:09:30.757159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:09:30.757382 dbus-daemon[1111]: [system] SELinux support is enabled Feb 12 19:09:30.758232 systemd[1]: Started dbus.service. Feb 12 19:09:30.761911 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:09:30.762105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:09:30.762458 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:09:30.762597 systemd[1]: Finished motdgen.service. Feb 12 19:09:30.764108 extend-filesystems[1113]: Found vda Feb 12 19:09:30.765307 extend-filesystems[1113]: Found vda1 Feb 12 19:09:30.766280 extend-filesystems[1113]: Found vda2 Feb 12 19:09:30.767196 extend-filesystems[1113]: Found vda3 Feb 12 19:09:30.767788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:09:30.768064 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:09:30.768461 extend-filesystems[1113]: Found usr Feb 12 19:09:30.769523 extend-filesystems[1113]: Found vda4 Feb 12 19:09:30.769523 extend-filesystems[1113]: Found vda6 Feb 12 19:09:30.769523 extend-filesystems[1113]: Found vda7 Feb 12 19:09:30.772594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:09:30.772640 systemd[1]: Reached target system-config.target. Feb 12 19:09:30.782483 extend-filesystems[1113]: Found vda9 Feb 12 19:09:30.782483 extend-filesystems[1113]: Checking size of /dev/vda9 Feb 12 19:09:30.782150 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:09:30.782166 systemd[1]: Reached target user-config.target. Feb 12 19:09:30.784790 jq[1138]: true Feb 12 19:09:30.802063 extend-filesystems[1113]: Resized partition /dev/vda9 Feb 12 19:09:30.814268 tar[1135]: ./ Feb 12 19:09:30.814268 tar[1135]: ./macvlan Feb 12 19:09:30.822859 tar[1136]: crictl Feb 12 19:09:30.834031 tar[1137]: linux-arm64/helm Feb 12 19:09:30.849469 extend-filesystems[1156]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:09:30.858236 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:09:30.858954 systemd-logind[1127]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 19:09:30.860630 systemd-logind[1127]: New seat seat0. Feb 12 19:09:30.865803 systemd[1]: Started systemd-logind.service. Feb 12 19:09:30.878234 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:09:30.895344 extend-filesystems[1156]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:09:30.895344 extend-filesystems[1156]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:09:30.895344 extend-filesystems[1156]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:09:30.899620 extend-filesystems[1113]: Resized filesystem in /dev/vda9 Feb 12 19:09:30.897557 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:09:30.897726 systemd[1]: Finished extend-filesystems.service. 
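extend-filesystems.service grows the root filesystem in place: the kernel lines above show /dev/vda9 going from 553472 to 1864699 4k blocks while mounted on /, with resize2fs doing the on-line resize. The same ext4 grow can be reproduced by hand; a sketch of what the service effectively runs, assuming the underlying partition has already been enlarged:

    # Online ext4 resize of the root filesystem (sketch of what the service does).
    resize2fs /dev/vda9
    # Confirm the new size is visible on /.
    df -h /
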
Feb 12 19:09:30.901882 update_engine[1130]: I0212 19:09:30.901551 1130 main.cc:92] Flatcar Update Engine starting Feb 12 19:09:30.904891 bash[1168]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:09:30.905749 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:09:30.911115 systemd[1]: Started update-engine.service. Feb 12 19:09:30.911313 update_engine[1130]: I0212 19:09:30.911288 1130 update_check_scheduler.cc:74] Next update check in 5m17s Feb 12 19:09:30.913790 systemd[1]: Started locksmithd.service. Feb 12 19:09:30.924952 tar[1135]: ./static Feb 12 19:09:30.945526 tar[1135]: ./vlan Feb 12 19:09:30.974448 tar[1135]: ./portmap Feb 12 19:09:30.987180 env[1139]: time="2024-02-12T19:09:30.987130280Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:09:30.998497 tar[1135]: ./host-local Feb 12 19:09:31.020999 tar[1135]: ./vrf Feb 12 19:09:31.047371 tar[1135]: ./bridge Feb 12 19:09:31.048281 env[1139]: time="2024-02-12T19:09:31.047951435Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:09:31.048281 env[1139]: time="2024-02-12T19:09:31.048095154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053296060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053330928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053554763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053573359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053586865Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053596244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053663935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.053938749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.054061347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:09:31.054302 env[1139]: time="2024-02-12T19:09:31.054078821Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:09:31.054568 env[1139]: time="2024-02-12T19:09:31.054133528Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:09:31.054568 env[1139]: time="2024-02-12T19:09:31.054147194Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062873590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062909500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062923647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062958355Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062974747Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.062988694Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063002641Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063372399Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063394603Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063408189Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063421815Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063434560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063561767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:09:31.065242 env[1139]: time="2024-02-12T19:09:31.063635551Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.063852212Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.063876379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.063889604Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.063995811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064007714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064019056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064030438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064042141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064054846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064066348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064077129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064090194Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064248542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064268340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065571 env[1139]: time="2024-02-12T19:09:31.064280083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:09:31.065850 env[1139]: time="2024-02-12T19:09:31.064291145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:09:31.065850 env[1139]: time="2024-02-12T19:09:31.064308098Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:09:31.065850 env[1139]: time="2024-02-12T19:09:31.064319921Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:09:31.065850 env[1139]: time="2024-02-12T19:09:31.064338757Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:09:31.065850 env[1139]: time="2024-02-12T19:09:31.064371461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:09:31.065943 env[1139]: time="2024-02-12T19:09:31.064559266Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:09:31.065943 env[1139]: time="2024-02-12T19:09:31.064612529Z" level=info msg="Connect containerd service" Feb 12 19:09:31.065943 env[1139]: time="2024-02-12T19:09:31.064643149Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:09:31.069267 env[1139]: time="2024-02-12T19:09:31.066467053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:09:31.069314 env[1139]: time="2024-02-12T19:09:31.069264448Z" level=info msg="Start subscribing containerd event" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069333382Z" level=info msg="Start recovering state" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069438707Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069464156Z" level=info msg="Start event monitor" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069482833Z" level=info msg="Start snapshots syncer" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069485718Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069542669Z" level=info msg="containerd successfully booted in 0.083152s" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069493253Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:09:31.070648 env[1139]: time="2024-02-12T19:09:31.069571084Z" level=info msg="Start streaming server" Feb 12 19:09:31.069620 systemd[1]: Started containerd.service. Feb 12 19:09:31.077384 tar[1135]: ./tuning Feb 12 19:09:31.123592 tar[1135]: ./firewall Feb 12 19:09:31.183310 tar[1135]: ./host-device Feb 12 19:09:31.238507 tar[1135]: ./sbr Feb 12 19:09:31.286451 tar[1135]: ./loopback Feb 12 19:09:31.295500 systemd[1]: Finished prepare-critools.service. Feb 12 19:09:31.312199 tar[1135]: ./dhcp Feb 12 19:09:31.368013 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:09:31.378142 tar[1135]: ./ptp Feb 12 19:09:31.405165 tar[1137]: linux-arm64/LICENSE Feb 12 19:09:31.405287 tar[1137]: linux-arm64/README.md Feb 12 19:09:31.406097 tar[1135]: ./ipvlan Feb 12 19:09:31.409491 systemd[1]: Finished prepare-helm.service. Feb 12 19:09:31.433350 tar[1135]: ./bandwidth Feb 12 19:09:31.475251 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:09:31.962712 sshd_keygen[1133]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:09:31.979875 systemd[1]: Finished sshd-keygen.service. Feb 12 19:09:31.981915 systemd[1]: Starting issuegen.service... Feb 12 19:09:31.986241 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:09:31.986377 systemd[1]: Finished issuegen.service. Feb 12 19:09:31.988388 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:09:31.996069 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:09:31.998119 systemd[1]: Started getty@tty1.service. Feb 12 19:09:31.999859 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:09:32.000752 systemd[1]: Reached target getty.target. Feb 12 19:09:32.001412 systemd[1]: Reached target multi-user.target. Feb 12 19:09:32.003045 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:09:32.009361 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:09:32.009506 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:09:32.010453 systemd[1]: Startup finished in 613ms (kernel) + 6.096s (initrd) + 4.326s (userspace) = 11.037s. Feb 12 19:09:32.332403 systemd-networkd[1040]: eth0: Gained IPv6LL Feb 12 19:09:34.509616 systemd[1]: Created slice system-sshd.slice. Feb 12 19:09:34.510978 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:57310.service. Feb 12 19:09:34.554995 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 57310 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:34.557648 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:34.577851 systemd-logind[1127]: New session 1 of user core. Feb 12 19:09:34.579798 systemd[1]: Created slice user-500.slice. Feb 12 19:09:34.581335 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:09:34.590171 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:09:34.591970 systemd[1]: Starting user@500.service... Feb 12 19:09:34.594754 (systemd)[1202]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:34.654710 systemd[1202]: Queued start job for default target default.target. 
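prepare-cni-plugins.service above has just unpacked the CNI binaries (bridge, host-local, portmap, ...) that containerd's CRI plugin expects under /opt/cni/bin, while the earlier containerd warning notes that /etc/cni/net.d is still empty, so pod networking is not yet configured. For illustration only, a minimal conflist of the kind a CNI provider later drops into that directory; the name and subnet here are placeholders, not values taken from this system:

    {
      "cniVersion": "0.3.1",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
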
Feb 12 19:09:34.655203 systemd[1202]: Reached target paths.target. Feb 12 19:09:34.655248 systemd[1202]: Reached target sockets.target. Feb 12 19:09:34.655260 systemd[1202]: Reached target timers.target. Feb 12 19:09:34.655270 systemd[1202]: Reached target basic.target. Feb 12 19:09:34.655322 systemd[1202]: Reached target default.target. Feb 12 19:09:34.655345 systemd[1202]: Startup finished in 54ms. Feb 12 19:09:34.655406 systemd[1]: Started user@500.service. Feb 12 19:09:34.656331 systemd[1]: Started session-1.scope. Feb 12 19:09:34.707637 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:57324.service. Feb 12 19:09:34.742375 sshd[1211]: Accepted publickey for core from 10.0.0.1 port 57324 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:34.743940 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:34.748693 systemd[1]: Started session-2.scope. Feb 12 19:09:34.748993 systemd-logind[1127]: New session 2 of user core. Feb 12 19:09:34.806108 sshd[1211]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:34.808706 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:57324.service: Deactivated successfully. Feb 12 19:09:34.809336 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:09:34.809816 systemd-logind[1127]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:09:34.811321 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:57332.service. Feb 12 19:09:34.812022 systemd-logind[1127]: Removed session 2. Feb 12 19:09:34.847723 sshd[1217]: Accepted publickey for core from 10.0.0.1 port 57332 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:34.848844 sshd[1217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:34.852183 systemd-logind[1127]: New session 3 of user core. Feb 12 19:09:34.853037 systemd[1]: Started session-3.scope. Feb 12 19:09:34.907518 sshd[1217]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:34.911877 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:57332.service: Deactivated successfully. Feb 12 19:09:34.912450 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:09:34.912957 systemd-logind[1127]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:09:34.913987 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:57340.service. Feb 12 19:09:34.914703 systemd-logind[1127]: Removed session 3. Feb 12 19:09:34.948274 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 57340 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:34.949523 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:34.953558 systemd[1]: Started session-4.scope. Feb 12 19:09:34.953844 systemd-logind[1127]: New session 4 of user core. Feb 12 19:09:35.008514 sshd[1223]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:35.011058 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:57340.service: Deactivated successfully. Feb 12 19:09:35.011622 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:09:35.012081 systemd-logind[1127]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:09:35.013111 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:57354.service. Feb 12 19:09:35.013710 systemd-logind[1127]: Removed session 4. 
Feb 12 19:09:35.047841 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 57354 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:35.049395 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:35.052705 systemd-logind[1127]: New session 5 of user core. Feb 12 19:09:35.053481 systemd[1]: Started session-5.scope. Feb 12 19:09:35.117275 sudo[1232]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:09:35.118096 sudo[1232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:09:35.684424 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:09:35.690011 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:09:35.690321 systemd[1]: Reached target network-online.target. Feb 12 19:09:35.691688 systemd[1]: Starting docker.service... Feb 12 19:09:35.795584 env[1250]: time="2024-02-12T19:09:35.795518365Z" level=info msg="Starting up" Feb 12 19:09:35.797855 env[1250]: time="2024-02-12T19:09:35.797819020Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:09:35.797958 env[1250]: time="2024-02-12T19:09:35.797943647Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:09:35.798048 env[1250]: time="2024-02-12T19:09:35.798030537Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:09:35.798102 env[1250]: time="2024-02-12T19:09:35.798089546Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:09:35.800918 env[1250]: time="2024-02-12T19:09:35.800871403Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:09:35.800918 env[1250]: time="2024-02-12T19:09:35.800896601Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:09:35.800918 env[1250]: time="2024-02-12T19:09:35.800914027Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:09:35.801046 env[1250]: time="2024-02-12T19:09:35.800931013Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:09:35.804765 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport518563328-merged.mount: Deactivated successfully. Feb 12 19:09:36.024468 env[1250]: time="2024-02-12T19:09:36.024367045Z" level=info msg="Loading containers: start." Feb 12 19:09:36.134281 kernel: Initializing XFRM netlink socket Feb 12 19:09:36.165019 env[1250]: time="2024-02-12T19:09:36.164976613Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:09:36.229373 systemd-networkd[1040]: docker0: Link UP Feb 12 19:09:36.240506 env[1250]: time="2024-02-12T19:09:36.240463132Z" level=info msg="Loading containers: done." 
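dockerd notes above that docker0 defaults to 172.17.0.0/16 and that --bip can override it. The same setting can be pinned in /etc/docker/daemon.json; a sketch with a placeholder address, not one taken from this host:

    {
      "bip": "192.168.200.1/24"
    }
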
Feb 12 19:09:36.275742 env[1250]: time="2024-02-12T19:09:36.275623075Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:09:36.276014 env[1250]: time="2024-02-12T19:09:36.275997322Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:09:36.276152 env[1250]: time="2024-02-12T19:09:36.276117772Z" level=info msg="Daemon has completed initialization" Feb 12 19:09:36.289663 systemd[1]: Started docker.service. Feb 12 19:09:36.297071 env[1250]: time="2024-02-12T19:09:36.296929833Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:09:36.315868 systemd[1]: Reloading. Feb 12 19:09:36.370872 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2024-02-12T19:09:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:09:36.370904 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2024-02-12T19:09:36Z" level=info msg="torcx already run" Feb 12 19:09:36.424867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:09:36.424887 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:09:36.439824 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:09:36.500460 systemd[1]: Started kubelet.service. Feb 12 19:09:36.840080 kubelet[1429]: E0212 19:09:36.840013 1429 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:09:36.842121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:09:36.842265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:09:36.997411 env[1139]: time="2024-02-12T19:09:36.997344327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:09:37.770266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210629676.mount: Deactivated successfully. 
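The kubelet exits here because no container runtime endpoint was supplied, and the error message itself names the fix. Since containerd is already serving on /run/containerd/containerd.sock (see its startup earlier in this log), the missing piece is a flag of this form; a sketch of the flag only, not the exact drop-in later used on this machine:

    # Point the kubelet at the local containerd CRI socket.
    kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
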
Feb 12 19:09:39.190180 env[1139]: time="2024-02-12T19:09:39.190128107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:39.192174 env[1139]: time="2024-02-12T19:09:39.192124504Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:39.194312 env[1139]: time="2024-02-12T19:09:39.194275159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:39.199229 env[1139]: time="2024-02-12T19:09:39.199166715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:39.200120 env[1139]: time="2024-02-12T19:09:39.200072847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 12 19:09:39.211026 env[1139]: time="2024-02-12T19:09:39.210975178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:09:42.459039 env[1139]: time="2024-02-12T19:09:42.458035911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:42.463334 env[1139]: time="2024-02-12T19:09:42.459571340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:42.463334 env[1139]: time="2024-02-12T19:09:42.461787099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:42.469168 env[1139]: time="2024-02-12T19:09:42.465352869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:42.469168 env[1139]: time="2024-02-12T19:09:42.466782876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 12 19:09:42.481032 env[1139]: time="2024-02-12T19:09:42.480938213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:09:43.580660 env[1139]: time="2024-02-12T19:09:43.580588598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:43.582908 env[1139]: time="2024-02-12T19:09:43.582808909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:43.585321 env[1139]: 
time="2024-02-12T19:09:43.585249417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:43.590247 env[1139]: time="2024-02-12T19:09:43.590184042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:43.591165 env[1139]: time="2024-02-12T19:09:43.591121162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 12 19:09:43.599799 env[1139]: time="2024-02-12T19:09:43.599742492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:09:44.505044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294247120.mount: Deactivated successfully. Feb 12 19:09:44.994351 env[1139]: time="2024-02-12T19:09:44.994288889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:44.996424 env[1139]: time="2024-02-12T19:09:44.996393939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:44.997909 env[1139]: time="2024-02-12T19:09:44.997871741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:44.999437 env[1139]: time="2024-02-12T19:09:44.999412717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:44.999869 env[1139]: time="2024-02-12T19:09:44.999838154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:09:45.008597 env[1139]: time="2024-02-12T19:09:45.008559380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:09:45.450671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698116143.mount: Deactivated successfully. 
Feb 12 19:09:45.458825 env[1139]: time="2024-02-12T19:09:45.458773042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:45.464287 env[1139]: time="2024-02-12T19:09:45.464241912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:45.468930 env[1139]: time="2024-02-12T19:09:45.468873162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:45.470471 env[1139]: time="2024-02-12T19:09:45.470433832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:45.470936 env[1139]: time="2024-02-12T19:09:45.470905043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 19:09:45.481164 env[1139]: time="2024-02-12T19:09:45.481090310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:09:46.331710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004679283.mount: Deactivated successfully. Feb 12 19:09:46.850620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:09:46.850796 systemd[1]: Stopped kubelet.service. Feb 12 19:09:46.852366 systemd[1]: Started kubelet.service. Feb 12 19:09:46.897058 kubelet[1483]: E0212 19:09:46.896979 1483 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:09:46.900000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:09:46.900135 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
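The kubelet fails again with the same flag-validation error, and systemd schedules restart number 1 roughly ten seconds after the first exit (19:09:36 to 19:09:46). That spacing is consistent with a unit restart policy along these lines, shown purely as an inference from the timing rather than the actual unit file:

    # Sketch of the restart policy implied by the ~10 s gap (actual unit may differ).
    [Service]
    Restart=always
    RestartSec=10
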
Feb 12 19:09:48.172127 env[1139]: time="2024-02-12T19:09:48.172068015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:48.174940 env[1139]: time="2024-02-12T19:09:48.174642447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:48.175934 env[1139]: time="2024-02-12T19:09:48.175884494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:48.177750 env[1139]: time="2024-02-12T19:09:48.177710840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:48.178599 env[1139]: time="2024-02-12T19:09:48.178561273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 12 19:09:48.187497 env[1139]: time="2024-02-12T19:09:48.187438958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:09:48.799084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386656897.mount: Deactivated successfully. Feb 12 19:09:49.465992 env[1139]: time="2024-02-12T19:09:49.465938428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:49.468298 env[1139]: time="2024-02-12T19:09:49.468256760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:49.470193 env[1139]: time="2024-02-12T19:09:49.469771002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:49.475155 env[1139]: time="2024-02-12T19:09:49.475114375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:49.476105 env[1139]: time="2024-02-12T19:09:49.476068236Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 12 19:09:54.087226 systemd[1]: Stopped kubelet.service. Feb 12 19:09:54.102898 systemd[1]: Reloading. 
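By this point containerd has pulled the control-plane images logged above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns). With the crictl binary staged earlier by prepare-critools.service, the cached images can be listed straight from containerd's CRI plugin; a sketch:

    # List images held by containerd's CRI plugin.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
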
Feb 12 19:09:54.169103 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2024-02-12T19:09:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:09:54.169515 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2024-02-12T19:09:54Z" level=info msg="torcx already run" Feb 12 19:09:54.227201 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:09:54.227232 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:09:54.243924 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:09:54.313328 systemd[1]: Started kubelet.service. Feb 12 19:09:54.366010 kubelet[1627]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:09:54.366010 kubelet[1627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:09:54.366367 kubelet[1627]: I0212 19:09:54.366108 1627 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:09:54.367317 kubelet[1627]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:09:54.367317 kubelet[1627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:09:54.782863 kubelet[1627]: I0212 19:09:54.782521 1627 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:09:54.783002 kubelet[1627]: I0212 19:09:54.782987 1627 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:09:54.783294 kubelet[1627]: I0212 19:09:54.783277 1627 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:09:54.787900 kubelet[1627]: I0212 19:09:54.787812 1627 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:09:54.788369 kubelet[1627]: E0212 19:09:54.788330 1627 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.790700 kubelet[1627]: W0212 19:09:54.790679 1627 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:09:54.791919 kubelet[1627]: I0212 19:09:54.791899 1627 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:09:54.792798 kubelet[1627]: I0212 19:09:54.792783 1627 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:09:54.792965 kubelet[1627]: I0212 19:09:54.792948 1627 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:09:54.793308 kubelet[1627]: I0212 19:09:54.793295 1627 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:09:54.793373 kubelet[1627]: I0212 19:09:54.793363 1627 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:09:54.793568 kubelet[1627]: I0212 19:09:54.793553 1627 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:09:54.799835 kubelet[1627]: I0212 19:09:54.799805 1627 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:09:54.799969 kubelet[1627]: I0212 19:09:54.799956 1627 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:09:54.800111 kubelet[1627]: I0212 19:09:54.800101 1627 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:09:54.800178 kubelet[1627]: I0212 19:09:54.800168 1627 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:09:54.802068 kubelet[1627]: W0212 19:09:54.801928 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.802068 kubelet[1627]: E0212 19:09:54.801986 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.802068 kubelet[1627]: W0212 19:09:54.802040 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.802068 kubelet[1627]: E0212 19:09:54.802063 1627 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.802275 kubelet[1627]: I0212 19:09:54.802258 1627 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:09:54.803776 kubelet[1627]: W0212 19:09:54.803755 1627 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:09:54.804567 kubelet[1627]: I0212 19:09:54.804547 1627 server.go:1186] "Started kubelet" Feb 12 19:09:54.805111 kubelet[1627]: I0212 19:09:54.804885 1627 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:09:54.806070 kubelet[1627]: E0212 19:09:54.805959 1627 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33336f32b9de3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 9, 54, 804522467, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 9, 54, 804522467, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.19:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.19:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:09:54.806187 kubelet[1627]: I0212 19:09:54.806117 1627 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:09:54.806396 kubelet[1627]: E0212 19:09:54.806369 1627 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:09:54.806396 kubelet[1627]: E0212 19:09:54.806396 1627 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:09:54.808006 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
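The repeated reflector failures above are the kubelet's informers trying to list Node and Service objects from https://10.0.0.19:6443 before the kube-apiserver static pod is running, so every request ends in "connection refused". The following client-go sketch issues the same kind of list call; the kubeconfig path is an assumption (the kubelet actually uses its bootstrapped, rotated client credentials), and the field selector mirrors the one in the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // The same list the Node reflector issues; while the apiserver pod is not
        // yet up this fails with "dial tcp 10.0.0.19:6443: connect: connection refused".
        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
            FieldSelector: "metadata.name=localhost",
            Limit:         500,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }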
Feb 12 19:09:54.808281 kubelet[1627]: I0212 19:09:54.808258 1627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:09:54.808561 kubelet[1627]: I0212 19:09:54.808535 1627 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:09:54.808624 kubelet[1627]: I0212 19:09:54.808608 1627 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:09:54.809610 kubelet[1627]: W0212 19:09:54.809562 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.809683 kubelet[1627]: E0212 19:09:54.809618 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.809972 kubelet[1627]: E0212 19:09:54.809934 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:54.810142 kubelet[1627]: E0212 19:09:54.810121 1627 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.830285 kubelet[1627]: I0212 19:09:54.830248 1627 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:09:54.830285 kubelet[1627]: I0212 19:09:54.830274 1627 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:09:54.830285 kubelet[1627]: I0212 19:09:54.830291 1627 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:09:54.840613 kubelet[1627]: I0212 19:09:54.837642 1627 policy_none.go:49] "None policy: Start" Feb 12 19:09:54.840613 kubelet[1627]: I0212 19:09:54.838355 1627 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:09:54.840613 kubelet[1627]: I0212 19:09:54.838391 1627 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:09:54.843824 systemd[1]: Created slice kubepods.slice. Feb 12 19:09:54.847512 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:09:54.849744 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:09:54.860005 kubelet[1627]: I0212 19:09:54.859969 1627 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:09:54.860239 kubelet[1627]: I0212 19:09:54.860200 1627 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:09:54.861343 kubelet[1627]: E0212 19:09:54.861326 1627 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 19:09:54.874481 kubelet[1627]: I0212 19:09:54.874431 1627 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:09:54.900597 kubelet[1627]: I0212 19:09:54.900543 1627 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:09:54.900597 kubelet[1627]: I0212 19:09:54.900579 1627 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:09:54.900597 kubelet[1627]: I0212 19:09:54.900599 1627 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:09:54.900791 kubelet[1627]: E0212 19:09:54.900668 1627 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:09:54.901512 kubelet[1627]: W0212 19:09:54.901441 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.901555 kubelet[1627]: E0212 19:09:54.901526 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:54.911063 kubelet[1627]: I0212 19:09:54.911032 1627 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:09:54.911473 kubelet[1627]: E0212 19:09:54.911446 1627 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Feb 12 19:09:55.001687 kubelet[1627]: I0212 19:09:55.001660 1627 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:09:55.003034 kubelet[1627]: I0212 19:09:55.003009 1627 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:09:55.003891 kubelet[1627]: I0212 19:09:55.003872 1627 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:09:55.004616 kubelet[1627]: I0212 19:09:55.004595 1627 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" Feb 12 19:09:55.005098 kubelet[1627]: I0212 19:09:55.005077 1627 status_manager.go:698] "Failed to get status for pod" podUID=a283407d6cf35caee2e9c4a606b1bfe2 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" Feb 12 19:09:55.006866 kubelet[1627]: I0212 19:09:55.006847 1627 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" Feb 12 19:09:55.008272 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
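"failed to ensure lease exists, will retry in 200ms" above (and the 400 ms, 800 ms and 1.6 s retries further down) is the kubelet's node-lease controller doubling its retry delay while the apiserver stays unreachable. The sketch below shows that ensure-with-doubling-backoff pattern against the coordination/v1 API; the cap, the kubeconfig path and the error handling are assumptions for illustration, not the kubelet's actual implementation (which, for example, also treats "already exists" as success).

    package main

    import (
        "context"
        "log"
        "time"

        coordinationv1 "k8s.io/api/coordination/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func ptr[T any](v T) *T { return &v }

    // ensureLease keeps trying to create kube-node-lease/<node>, doubling the
    // retry delay on each failure (200ms, 400ms, 800ms, 1.6s, ...) up to a cap.
    func ensureLease(clientset kubernetes.Interface, node string) {
        delay := 200 * time.Millisecond
        const maxDelay = 7 * time.Second // illustrative cap, not the kubelet's value

        for {
            lease := &coordinationv1.Lease{
                ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
                Spec: coordinationv1.LeaseSpec{
                    HolderIdentity:       ptr(node),
                    LeaseDurationSeconds: ptr(int32(40)),
                },
            }
            _, err := clientset.CoordinationV1().Leases("kube-node-lease").
                Create(context.Background(), lease, metav1.CreateOptions{})
            if err == nil {
                return
            }
            log.Printf("failed to ensure lease exists, will retry in %v: %v", delay, err)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

    func main() {
        // Assumed kubeconfig path; the kubelet uses its own rotated client credentials.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ensureLease(clientset, "localhost")
    }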
Feb 12 19:09:55.009691 kubelet[1627]: I0212 19:09:55.009664 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:09:55.010499 kubelet[1627]: E0212 19:09:55.010469 1627 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:55.026410 systemd[1]: Created slice kubepods-burstable-poda283407d6cf35caee2e9c4a606b1bfe2.slice. Feb 12 19:09:55.029396 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 12 19:09:55.110398 kubelet[1627]: I0212 19:09:55.110356 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:09:55.110534 kubelet[1627]: I0212 19:09:55.110412 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:09:55.110534 kubelet[1627]: I0212 19:09:55.110439 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:09:55.110534 kubelet[1627]: I0212 19:09:55.110464 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:09:55.110534 kubelet[1627]: I0212 19:09:55.110492 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:09:55.110534 kubelet[1627]: I0212 19:09:55.110517 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:09:55.110670 kubelet[1627]: I0212 19:09:55.110624 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") 
pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:09:55.110670 kubelet[1627]: I0212 19:09:55.110654 1627 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:09:55.113074 kubelet[1627]: I0212 19:09:55.113044 1627 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:09:55.113490 kubelet[1627]: E0212 19:09:55.113474 1627 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Feb 12 19:09:55.324666 kubelet[1627]: E0212 19:09:55.324615 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.325268 env[1139]: time="2024-02-12T19:09:55.325227615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 12 19:09:55.328404 kubelet[1627]: E0212 19:09:55.328380 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.329269 env[1139]: time="2024-02-12T19:09:55.329236033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a283407d6cf35caee2e9c4a606b1bfe2,Namespace:kube-system,Attempt:0,}" Feb 12 19:09:55.331790 kubelet[1627]: E0212 19:09:55.331771 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.332103 env[1139]: time="2024-02-12T19:09:55.332070044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 12 19:09:55.411244 kubelet[1627]: E0212 19:09:55.411117 1627 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:55.514544 kubelet[1627]: I0212 19:09:55.514519 1627 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:09:55.514867 kubelet[1627]: E0212 19:09:55.514834 1627 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Feb 12 19:09:55.783461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791797949.mount: Deactivated successfully. 
Feb 12 19:09:55.788501 env[1139]: time="2024-02-12T19:09:55.788455076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.789292 env[1139]: time="2024-02-12T19:09:55.789263290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.791381 env[1139]: time="2024-02-12T19:09:55.791344711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.792368 env[1139]: time="2024-02-12T19:09:55.792340002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.793103 env[1139]: time="2024-02-12T19:09:55.793073746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.793864 env[1139]: time="2024-02-12T19:09:55.793833540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.797001 env[1139]: time="2024-02-12T19:09:55.796965915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.799996 env[1139]: time="2024-02-12T19:09:55.799967116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.801734 env[1139]: time="2024-02-12T19:09:55.801699352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.804198 env[1139]: time="2024-02-12T19:09:55.804169574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.805181 env[1139]: time="2024-02-12T19:09:55.805148659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.806203 env[1139]: time="2024-02-12T19:09:55.806175163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:09:55.843771 env[1139]: time="2024-02-12T19:09:55.843534011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:09:55.843771 env[1139]: time="2024-02-12T19:09:55.843570946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:09:55.843771 env[1139]: time="2024-02-12T19:09:55.843583271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:09:55.844086 env[1139]: time="2024-02-12T19:09:55.844025494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92 pid=1716 runtime=io.containerd.runc.v2 Feb 12 19:09:55.844417 env[1139]: time="2024-02-12T19:09:55.844261832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:09:55.844417 env[1139]: time="2024-02-12T19:09:55.844295526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:09:55.844417 env[1139]: time="2024-02-12T19:09:55.844318815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:09:55.844769 env[1139]: time="2024-02-12T19:09:55.844558995Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fc8569b702c6226db0b08ad3acdb2582d0608cfb59e3adbd34bfafefd87b3bd pid=1725 runtime=io.containerd.runc.v2 Feb 12 19:09:55.844916 env[1139]: time="2024-02-12T19:09:55.844866722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:09:55.844957 env[1139]: time="2024-02-12T19:09:55.844929708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:09:55.844990 env[1139]: time="2024-02-12T19:09:55.844954678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:09:55.845352 env[1139]: time="2024-02-12T19:09:55.845309225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cd90c1ffc14dd6793ff8aa3a90699b1de36a27d74a0bbb40df01d9477b8e6cc pid=1715 runtime=io.containerd.runc.v2 Feb 12 19:09:55.857491 systemd[1]: Started cri-containerd-2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92.scope. Feb 12 19:09:55.860513 systemd[1]: Started cri-containerd-2fc8569b702c6226db0b08ad3acdb2582d0608cfb59e3adbd34bfafefd87b3bd.scope. Feb 12 19:09:55.878261 systemd[1]: Started cri-containerd-6cd90c1ffc14dd6793ff8aa3a90699b1de36a27d74a0bbb40df01d9477b8e6cc.scope. 
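The "starting signal loop ... namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/<id> pid=..." lines are the runc v2 shims backing each sandbox, and the matching cri-containerd-<id>.scope units are how systemd tracks them. The sketch below looks one of those tasks up through the containerd Go client; the sandbox id is taken from the log, and the socket path and k8s.io namespace are the usual containerd/CRI defaults rather than anything stated here.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace, as the shim log shows.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Sandbox id copied from the log entries above.
        const id = "2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92"
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        status, err := task.Status(ctx)
        if err != nil {
            log.Fatal(err)
        }
        // Pid is the sandbox's init (pause) process; the pid printed by the shim
        // in the log is its supervisor process, which is a different number.
        fmt.Printf("task pid=%d status=%s\n", task.Pid(), status.Status)
    }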
Feb 12 19:09:55.925308 env[1139]: time="2024-02-12T19:09:55.925190575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92\"" Feb 12 19:09:55.927679 kubelet[1627]: E0212 19:09:55.927446 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.927778 env[1139]: time="2024-02-12T19:09:55.927460074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc8569b702c6226db0b08ad3acdb2582d0608cfb59e3adbd34bfafefd87b3bd\"" Feb 12 19:09:55.929391 kubelet[1627]: E0212 19:09:55.928205 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.929466 env[1139]: time="2024-02-12T19:09:55.929335449Z" level=info msg="CreateContainer within sandbox \"2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:09:55.930412 env[1139]: time="2024-02-12T19:09:55.930324418Z" level=info msg="CreateContainer within sandbox \"2fc8569b702c6226db0b08ad3acdb2582d0608cfb59e3adbd34bfafefd87b3bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:09:55.935933 env[1139]: time="2024-02-12T19:09:55.935798001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a283407d6cf35caee2e9c4a606b1bfe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cd90c1ffc14dd6793ff8aa3a90699b1de36a27d74a0bbb40df01d9477b8e6cc\"" Feb 12 19:09:55.936548 kubelet[1627]: E0212 19:09:55.936500 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:55.939095 env[1139]: time="2024-02-12T19:09:55.939053867Z" level=info msg="CreateContainer within sandbox \"6cd90c1ffc14dd6793ff8aa3a90699b1de36a27d74a0bbb40df01d9477b8e6cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:09:55.946624 env[1139]: time="2024-02-12T19:09:55.946573497Z" level=info msg="CreateContainer within sandbox \"2fa5c0b25684a16065eada954855045763e2c80eb4d7e11d7c5d070c3ed63f92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e9db44ed18dd5418ac9e4c95839ae93dca83443b247b0077dc9566cefc5698c4\"" Feb 12 19:09:55.947183 env[1139]: time="2024-02-12T19:09:55.947153176Z" level=info msg="StartContainer for \"e9db44ed18dd5418ac9e4c95839ae93dca83443b247b0077dc9566cefc5698c4\"" Feb 12 19:09:55.950278 env[1139]: time="2024-02-12T19:09:55.950219845Z" level=info msg="CreateContainer within sandbox \"2fc8569b702c6226db0b08ad3acdb2582d0608cfb59e3adbd34bfafefd87b3bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c90f6f28c5acce64452aa878bb980acf35413d1fc41bcae82475a085150a6b9d\"" Feb 12 19:09:55.950639 env[1139]: time="2024-02-12T19:09:55.950612207Z" level=info msg="StartContainer for \"c90f6f28c5acce64452aa878bb980acf35413d1fc41bcae82475a085150a6b9d\"" Feb 12 19:09:55.953260 env[1139]: time="2024-02-12T19:09:55.953177668Z" level=info 
msg="CreateContainer within sandbox \"6cd90c1ffc14dd6793ff8aa3a90699b1de36a27d74a0bbb40df01d9477b8e6cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76c3f7a18daf5bbe34a14a1129edd540739c38924183ab1ca8dc0a09376fcd83\"" Feb 12 19:09:55.953734 env[1139]: time="2024-02-12T19:09:55.953706606Z" level=info msg="StartContainer for \"76c3f7a18daf5bbe34a14a1129edd540739c38924183ab1ca8dc0a09376fcd83\"" Feb 12 19:09:55.965416 systemd[1]: Started cri-containerd-e9db44ed18dd5418ac9e4c95839ae93dca83443b247b0077dc9566cefc5698c4.scope. Feb 12 19:09:55.971892 systemd[1]: Started cri-containerd-c90f6f28c5acce64452aa878bb980acf35413d1fc41bcae82475a085150a6b9d.scope. Feb 12 19:09:55.975427 kubelet[1627]: W0212 19:09:55.974727 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:55.975427 kubelet[1627]: E0212 19:09:55.974795 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:55.977305 systemd[1]: Started cri-containerd-76c3f7a18daf5bbe34a14a1129edd540739c38924183ab1ca8dc0a09376fcd83.scope. Feb 12 19:09:56.036491 env[1139]: time="2024-02-12T19:09:56.036394793Z" level=info msg="StartContainer for \"76c3f7a18daf5bbe34a14a1129edd540739c38924183ab1ca8dc0a09376fcd83\" returns successfully" Feb 12 19:09:56.049443 env[1139]: time="2024-02-12T19:09:56.049397954Z" level=info msg="StartContainer for \"e9db44ed18dd5418ac9e4c95839ae93dca83443b247b0077dc9566cefc5698c4\" returns successfully" Feb 12 19:09:56.078488 env[1139]: time="2024-02-12T19:09:56.078083114Z" level=info msg="StartContainer for \"c90f6f28c5acce64452aa878bb980acf35413d1fc41bcae82475a085150a6b9d\" returns successfully" Feb 12 19:09:56.115834 kubelet[1627]: W0212 19:09:56.112149 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.115834 kubelet[1627]: E0212 19:09:56.112236 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.214915 kubelet[1627]: E0212 19:09:56.214869 1627 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.237476 kubelet[1627]: W0212 19:09:56.237418 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.237476 kubelet[1627]: E0212 19:09:56.237476 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.250394 kubelet[1627]: W0212 19:09:56.250327 1627 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.250394 kubelet[1627]: E0212 19:09:56.250386 1627 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Feb 12 19:09:56.316090 kubelet[1627]: I0212 19:09:56.315982 1627 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:09:56.908039 kubelet[1627]: E0212 19:09:56.908001 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:56.910885 kubelet[1627]: E0212 19:09:56.910863 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:56.912931 kubelet[1627]: E0212 19:09:56.912908 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:57.914928 kubelet[1627]: E0212 19:09:57.914449 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:57.914928 kubelet[1627]: E0212 19:09:57.914551 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:57.914928 kubelet[1627]: E0212 19:09:57.914882 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:58.020115 kubelet[1627]: I0212 19:09:58.020074 1627 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:09:58.044037 kubelet[1627]: E0212 19:09:58.044003 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.144581 kubelet[1627]: E0212 19:09:58.144539 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.245114 kubelet[1627]: E0212 19:09:58.245006 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.345718 kubelet[1627]: E0212 19:09:58.345675 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.446680 kubelet[1627]: E0212 19:09:58.446636 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.547099 kubelet[1627]: E0212 19:09:58.546993 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.647537 kubelet[1627]: E0212 19:09:58.647492 1627 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.748060 kubelet[1627]: E0212 19:09:58.748018 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.849025 kubelet[1627]: E0212 19:09:58.848910 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:58.915746 kubelet[1627]: E0212 19:09:58.915721 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:58.949480 kubelet[1627]: E0212 19:09:58.949447 1627 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:09:59.405568 kubelet[1627]: E0212 19:09:59.405539 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:59.805136 kubelet[1627]: I0212 19:09:59.805040 1627 apiserver.go:52] "Watching apiserver" Feb 12 19:09:59.809133 kubelet[1627]: I0212 19:09:59.809104 1627 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:09:59.837348 kubelet[1627]: I0212 19:09:59.837308 1627 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:09:59.916149 kubelet[1627]: E0212 19:09:59.916109 1627 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:00.899902 systemd[1]: Reloading. Feb 12 19:10:00.956044 /usr/lib/systemd/system-generators/torcx-generator[1960]: time="2024-02-12T19:10:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:10:00.956435 /usr/lib/systemd/system-generators/torcx-generator[1960]: time="2024-02-12T19:10:00Z" level=info msg="torcx already run" Feb 12 19:10:01.012829 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:10:01.012847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:10:01.028207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:10:01.111085 systemd[1]: Stopping kubelet.service... Feb 12 19:10:01.130654 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:10:01.130867 systemd[1]: Stopped kubelet.service. Feb 12 19:10:01.132768 systemd[1]: Started kubelet.service. Feb 12 19:10:01.191149 kubelet[1998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:10:01.191149 kubelet[1998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:10:01.191149 kubelet[1998]: I0212 19:10:01.190711 1998 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:10:01.192162 kubelet[1998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:10:01.192162 kubelet[1998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:10:01.195293 kubelet[1998]: I0212 19:10:01.195271 1998 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:10:01.195293 kubelet[1998]: I0212 19:10:01.195294 1998 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:10:01.196163 kubelet[1998]: I0212 19:10:01.196128 1998 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:10:01.198705 kubelet[1998]: I0212 19:10:01.198679 1998 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:10:01.199562 kubelet[1998]: I0212 19:10:01.199538 1998 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:10:01.201546 kubelet[1998]: W0212 19:10:01.201531 1998 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:10:01.202581 kubelet[1998]: I0212 19:10:01.202559 1998 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:10:01.202955 kubelet[1998]: I0212 19:10:01.202942 1998 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:10:01.203128 kubelet[1998]: I0212 19:10:01.203109 1998 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:10:01.203297 kubelet[1998]: I0212 19:10:01.203282 1998 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 
19:10:01.203370 kubelet[1998]: I0212 19:10:01.203360 1998 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:10:01.203451 kubelet[1998]: I0212 19:10:01.203441 1998 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:10:01.206349 kubelet[1998]: I0212 19:10:01.206325 1998 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:10:01.206468 kubelet[1998]: I0212 19:10:01.206455 1998 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:10:01.206557 kubelet[1998]: I0212 19:10:01.206545 1998 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:10:01.206630 kubelet[1998]: I0212 19:10:01.206620 1998 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:10:01.207435 kubelet[1998]: I0212 19:10:01.207420 1998 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:10:01.208636 kubelet[1998]: I0212 19:10:01.208617 1998 server.go:1186] "Started kubelet" Feb 12 19:10:01.209738 kubelet[1998]: I0212 19:10:01.209715 1998 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:10:01.210006 kubelet[1998]: I0212 19:10:01.209980 1998 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:10:01.210899 kubelet[1998]: I0212 19:10:01.210870 1998 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:10:01.212312 kubelet[1998]: E0212 19:10:01.212143 1998 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:10:01.212312 kubelet[1998]: E0212 19:10:01.212171 1998 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:10:01.212312 kubelet[1998]: E0212 19:10:01.212232 1998 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:10:01.212312 kubelet[1998]: I0212 19:10:01.212254 1998 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:10:01.212312 kubelet[1998]: I0212 19:10:01.212317 1998 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:10:01.271170 kubelet[1998]: I0212 19:10:01.271110 1998 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.277835 1998 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.277868 1998 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.277886 1998 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278062 1998 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278076 1998 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278083 1998 policy_none.go:49] "None policy: Start" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278697 1998 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278722 1998 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:10:01.281116 kubelet[1998]: I0212 19:10:01.278845 1998 state_mem.go:75] "Updated machine memory state" Feb 12 19:10:01.285664 kubelet[1998]: I0212 19:10:01.284083 1998 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:10:01.285664 kubelet[1998]: I0212 19:10:01.284302 1998 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:10:01.295343 kubelet[1998]: I0212 19:10:01.295314 1998 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:10:01.295343 kubelet[1998]: I0212 19:10:01.295336 1998 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:10:01.295343 kubelet[1998]: I0212 19:10:01.295352 1998 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:10:01.295506 kubelet[1998]: E0212 19:10:01.295395 1998 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:10:01.315617 kubelet[1998]: I0212 19:10:01.315579 1998 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:10:01.321611 kubelet[1998]: I0212 19:10:01.321569 1998 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 19:10:01.321794 kubelet[1998]: I0212 19:10:01.321686 1998 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:10:01.396160 kubelet[1998]: I0212 19:10:01.396111 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:01.396303 kubelet[1998]: I0212 19:10:01.396268 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:01.396717 kubelet[1998]: I0212 19:10:01.396675 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:01.514441 kubelet[1998]: I0212 19:10:01.514322 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:10:01.514441 kubelet[1998]: I0212 19:10:01.514376 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 
19:10:01.514599 kubelet[1998]: I0212 19:10:01.514440 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:10:01.514599 kubelet[1998]: I0212 19:10:01.514529 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:10:01.514599 kubelet[1998]: I0212 19:10:01.514577 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:10:01.514688 kubelet[1998]: I0212 19:10:01.514609 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:10:01.514688 kubelet[1998]: I0212 19:10:01.514648 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:10:01.514688 kubelet[1998]: I0212 19:10:01.514686 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:10:01.514770 kubelet[1998]: I0212 19:10:01.514705 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a283407d6cf35caee2e9c4a606b1bfe2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a283407d6cf35caee2e9c4a606b1bfe2\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:10:01.612500 kubelet[1998]: E0212 19:10:01.612468 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 19:10:01.701778 kubelet[1998]: E0212 19:10:01.701749 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:01.710996 kubelet[1998]: E0212 19:10:01.710957 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:01.913207 kubelet[1998]: E0212 19:10:01.913175 1998 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:02.207274 kubelet[1998]: I0212 19:10:02.207158 1998 apiserver.go:52] "Watching apiserver" Feb 12 19:10:02.394714 sudo[1232]: pam_unix(sudo:session): session closed for user root Feb 12 19:10:02.396506 sshd[1229]: pam_unix(sshd:session): session closed for user core Feb 12 19:10:02.399384 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:57354.service: Deactivated successfully. Feb 12 19:10:02.400141 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:10:02.400354 systemd[1]: session-5.scope: Consumed 5.471s CPU time. Feb 12 19:10:02.400810 systemd-logind[1127]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:10:02.401596 systemd-logind[1127]: Removed session 5. Feb 12 19:10:02.413397 kubelet[1998]: I0212 19:10:02.413362 1998 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:10:02.421522 kubelet[1998]: I0212 19:10:02.421492 1998 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:10:02.812139 kubelet[1998]: E0212 19:10:02.812102 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 19:10:02.812603 kubelet[1998]: E0212 19:10:02.812587 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.013814 kubelet[1998]: E0212 19:10:03.013762 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 12 19:10:03.014231 kubelet[1998]: E0212 19:10:03.014197 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.213185 kubelet[1998]: E0212 19:10:03.213116 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 19:10:03.213885 kubelet[1998]: E0212 19:10:03.213577 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.303936 kubelet[1998]: E0212 19:10:03.303903 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.305406 kubelet[1998]: E0212 19:10:03.305387 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.306142 kubelet[1998]: E0212 19:10:03.306128 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:03.420502 kubelet[1998]: I0212 19:10:03.420408 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.420366247 pod.CreationTimestamp="2024-02-12 19:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:03.420349803 +0000 UTC m=+2.282889542" watchObservedRunningTime="2024-02-12 19:10:03.420366247 +0000 UTC m=+2.282905986" Feb 12 19:10:04.216124 kubelet[1998]: I0212 19:10:04.216091 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.2160506269999996 pod.CreationTimestamp="2024-02-12 19:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:03.813689471 +0000 UTC m=+2.676229250" watchObservedRunningTime="2024-02-12 19:10:04.216050627 +0000 UTC m=+3.078590366" Feb 12 19:10:04.615952 kubelet[1998]: I0212 19:10:04.615915 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.615871529 pod.CreationTimestamp="2024-02-12 19:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:04.216873897 +0000 UTC m=+3.079413636" watchObservedRunningTime="2024-02-12 19:10:04.615871529 +0000 UTC m=+3.478411228" Feb 12 19:10:07.747891 kubelet[1998]: E0212 19:10:07.747859 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:08.310860 kubelet[1998]: E0212 19:10:08.310813 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:08.643713 kubelet[1998]: E0212 19:10:08.643683 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:09.311794 kubelet[1998]: E0212 19:10:09.311740 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:12.267720 kubelet[1998]: E0212 19:10:12.267599 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:12.315345 kubelet[1998]: E0212 19:10:12.314685 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:15.026945 kubelet[1998]: I0212 19:10:15.026909 1998 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:10:15.029673 env[1139]: time="2024-02-12T19:10:15.028633020Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:10:15.030454 kubelet[1998]: I0212 19:10:15.030428 1998 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:10:15.299721 kubelet[1998]: I0212 19:10:15.299615 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:15.300385 kubelet[1998]: I0212 19:10:15.300344 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:15.305855 systemd[1]: Created slice kubepods-burstable-poda9385274_7a9a_474a_af36_27eccf9337ce.slice. 
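"Updating runtime config through cri with podcidr CIDR=192.168.0.0/24" above is the kubelet forwarding the node's pod CIDR to containerd over the CRI; containerd then waits for a CNI config to be dropped (the flannel DaemonSet admitted just after will write one), hence "No cni config template is specified". A sketch of that CRI call with the same CIDR follows, under the same socket and v1 API assumptions as the earlier CRI examples.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // The pod CIDR handed out by the controller-manager's node IPAM, as logged above.
        _, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("runtime network config updated")
    }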
Feb 12 19:10:15.310495 systemd[1]: Created slice kubepods-besteffort-podc2b5273f_ffdb_4c50_933d_e872c9c76b16.slice. Feb 12 19:10:15.311007 kubelet[1998]: I0212 19:10:15.310964 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9385274-7a9a-474a-af36-27eccf9337ce-run\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311007 kubelet[1998]: I0212 19:10:15.311008 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a9385274-7a9a-474a-af36-27eccf9337ce-cni-plugin\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311122 kubelet[1998]: I0212 19:10:15.311029 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a9385274-7a9a-474a-af36-27eccf9337ce-cni\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311122 kubelet[1998]: I0212 19:10:15.311053 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86dc7\" (UniqueName: \"kubernetes.io/projected/a9385274-7a9a-474a-af36-27eccf9337ce-kube-api-access-86dc7\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311122 kubelet[1998]: I0212 19:10:15.311073 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2b5273f-ffdb-4c50-933d-e872c9c76b16-kube-proxy\") pod \"kube-proxy-twjj6\" (UID: \"c2b5273f-ffdb-4c50-933d-e872c9c76b16\") " pod="kube-system/kube-proxy-twjj6" Feb 12 19:10:15.311122 kubelet[1998]: I0212 19:10:15.311092 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a9385274-7a9a-474a-af36-27eccf9337ce-flannel-cfg\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311122 kubelet[1998]: I0212 19:10:15.311110 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2b5273f-ffdb-4c50-933d-e872c9c76b16-xtables-lock\") pod \"kube-proxy-twjj6\" (UID: \"c2b5273f-ffdb-4c50-933d-e872c9c76b16\") " pod="kube-system/kube-proxy-twjj6" Feb 12 19:10:15.311272 kubelet[1998]: I0212 19:10:15.311129 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlg7h\" (UniqueName: \"kubernetes.io/projected/c2b5273f-ffdb-4c50-933d-e872c9c76b16-kube-api-access-nlg7h\") pod \"kube-proxy-twjj6\" (UID: \"c2b5273f-ffdb-4c50-933d-e872c9c76b16\") " pod="kube-system/kube-proxy-twjj6" Feb 12 19:10:15.311272 kubelet[1998]: I0212 19:10:15.311152 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9385274-7a9a-474a-af36-27eccf9337ce-xtables-lock\") pod \"kube-flannel-ds-pstqh\" (UID: \"a9385274-7a9a-474a-af36-27eccf9337ce\") " 
pod="kube-flannel/kube-flannel-ds-pstqh" Feb 12 19:10:15.311272 kubelet[1998]: I0212 19:10:15.311174 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2b5273f-ffdb-4c50-933d-e872c9c76b16-lib-modules\") pod \"kube-proxy-twjj6\" (UID: \"c2b5273f-ffdb-4c50-933d-e872c9c76b16\") " pod="kube-system/kube-proxy-twjj6" Feb 12 19:10:15.421030 kubelet[1998]: E0212 19:10:15.420981 1998 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 12 19:10:15.421030 kubelet[1998]: E0212 19:10:15.421012 1998 projected.go:198] Error preparing data for projected volume kube-api-access-nlg7h for pod kube-system/kube-proxy-twjj6: configmap "kube-root-ca.crt" not found Feb 12 19:10:15.421250 kubelet[1998]: E0212 19:10:15.421067 1998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2b5273f-ffdb-4c50-933d-e872c9c76b16-kube-api-access-nlg7h podName:c2b5273f-ffdb-4c50-933d-e872c9c76b16 nodeName:}" failed. No retries permitted until 2024-02-12 19:10:15.921048144 +0000 UTC m=+14.783587843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nlg7h" (UniqueName: "kubernetes.io/projected/c2b5273f-ffdb-4c50-933d-e872c9c76b16-kube-api-access-nlg7h") pod "kube-proxy-twjj6" (UID: "c2b5273f-ffdb-4c50-933d-e872c9c76b16") : configmap "kube-root-ca.crt" not found Feb 12 19:10:15.421406 kubelet[1998]: E0212 19:10:15.421381 1998 projected.go:292] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 12 19:10:15.421406 kubelet[1998]: E0212 19:10:15.421398 1998 projected.go:198] Error preparing data for projected volume kube-api-access-86dc7 for pod kube-flannel/kube-flannel-ds-pstqh: configmap "kube-root-ca.crt" not found Feb 12 19:10:15.421484 kubelet[1998]: E0212 19:10:15.421435 1998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a9385274-7a9a-474a-af36-27eccf9337ce-kube-api-access-86dc7 podName:a9385274-7a9a-474a-af36-27eccf9337ce nodeName:}" failed. No retries permitted until 2024-02-12 19:10:15.921421467 +0000 UTC m=+14.783961206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86dc7" (UniqueName: "kubernetes.io/projected/a9385274-7a9a-474a-af36-27eccf9337ce-kube-api-access-86dc7") pod "kube-flannel-ds-pstqh" (UID: "a9385274-7a9a-474a-af36-27eccf9337ce") : configmap "kube-root-ca.crt" not found Feb 12 19:10:15.987063 update_engine[1130]: I0212 19:10:15.987007 1130 update_attempter.cc:509] Updating boot flags... 
Feb 12 19:10:16.208836 kubelet[1998]: E0212 19:10:16.208792 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:16.209527 env[1139]: time="2024-02-12T19:10:16.209475842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pstqh,Uid:a9385274-7a9a-474a-af36-27eccf9337ce,Namespace:kube-flannel,Attempt:0,}" Feb 12 19:10:16.217728 kubelet[1998]: E0212 19:10:16.217692 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:16.218412 env[1139]: time="2024-02-12T19:10:16.218368150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twjj6,Uid:c2b5273f-ffdb-4c50-933d-e872c9c76b16,Namespace:kube-system,Attempt:0,}" Feb 12 19:10:16.250306 env[1139]: time="2024-02-12T19:10:16.250146740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:10:16.250461 env[1139]: time="2024-02-12T19:10:16.250191185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:10:16.250461 env[1139]: time="2024-02-12T19:10:16.250201706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:10:16.250917 env[1139]: time="2024-02-12T19:10:16.250871017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1 pid=2114 runtime=io.containerd.runc.v2 Feb 12 19:10:16.251199 env[1139]: time="2024-02-12T19:10:16.251154847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:10:16.251304 env[1139]: time="2024-02-12T19:10:16.251192451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:10:16.251304 env[1139]: time="2024-02-12T19:10:16.251203612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:10:16.251398 env[1139]: time="2024-02-12T19:10:16.251333546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599 pid=2115 runtime=io.containerd.runc.v2 Feb 12 19:10:16.262678 systemd[1]: Started cri-containerd-cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1.scope. Feb 12 19:10:16.271974 systemd[1]: Started cri-containerd-f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599.scope. 
Feb 12 19:10:16.325762 env[1139]: time="2024-02-12T19:10:16.325680956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pstqh,Uid:a9385274-7a9a-474a-af36-27eccf9337ce,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\"" Feb 12 19:10:16.326614 kubelet[1998]: E0212 19:10:16.326444 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:16.329051 env[1139]: time="2024-02-12T19:10:16.329017552Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 12 19:10:16.330398 env[1139]: time="2024-02-12T19:10:16.330166275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twjj6,Uid:c2b5273f-ffdb-4c50-933d-e872c9c76b16,Namespace:kube-system,Attempt:0,} returns sandbox id \"f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599\"" Feb 12 19:10:16.331376 kubelet[1998]: E0212 19:10:16.331348 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:16.334258 env[1139]: time="2024-02-12T19:10:16.333834146Z" level=info msg="CreateContainer within sandbox \"f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:10:16.349337 env[1139]: time="2024-02-12T19:10:16.349277073Z" level=info msg="CreateContainer within sandbox \"f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d45c68ca5f3a5c3f44d015941e023df135900a342c56c79f330f34dd760b19a\"" Feb 12 19:10:16.350057 env[1139]: time="2024-02-12T19:10:16.350028513Z" level=info msg="StartContainer for \"2d45c68ca5f3a5c3f44d015941e023df135900a342c56c79f330f34dd760b19a\"" Feb 12 19:10:16.365811 systemd[1]: Started cri-containerd-2d45c68ca5f3a5c3f44d015941e023df135900a342c56c79f330f34dd760b19a.scope. Feb 12 19:10:16.442081 env[1139]: time="2024-02-12T19:10:16.442025565Z" level=info msg="StartContainer for \"2d45c68ca5f3a5c3f44d015941e023df135900a342c56c79f330f34dd760b19a\" returns successfully" Feb 12 19:10:17.322868 kubelet[1998]: E0212 19:10:17.322804 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:17.331768 kubelet[1998]: I0212 19:10:17.331726 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-twjj6" podStartSLOduration=2.331679532 pod.CreationTimestamp="2024-02-12 19:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:17.330985343 +0000 UTC m=+16.193525082" watchObservedRunningTime="2024-02-12 19:10:17.331679532 +0000 UTC m=+16.194219271" Feb 12 19:10:17.396998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558751442.mount: Deactivated successfully. 
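At this point containerd has created both pod sandboxes (cc716509... for kube-flannel-ds-pstqh, f45cbe81... for kube-proxy-twjj6) and started the kube-proxy container inside the latter. The same objects can be inspected on the node through the CRI, assuming crictl is configured to talk to containerd's socket (illustrative commands):

    crictl pods     # lists the two sandboxes and their ids
    crictl ps       # lists running containers, including kube-proxy
    crictl inspectp f45cbe81d3154e28bb9b9c929774336f61744ec341ad58136046de55d94bf599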
Feb 12 19:10:17.785758 env[1139]: time="2024-02-12T19:10:17.785703852Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:17.797935 env[1139]: time="2024-02-12T19:10:17.797866988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:17.799525 env[1139]: time="2024-02-12T19:10:17.799485870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:17.800668 env[1139]: time="2024-02-12T19:10:17.800638865Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:17.801199 env[1139]: time="2024-02-12T19:10:17.801174799Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\"" Feb 12 19:10:17.804953 env[1139]: time="2024-02-12T19:10:17.804898531Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 12 19:10:17.815053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851483961.mount: Deactivated successfully. Feb 12 19:10:17.817969 env[1139]: time="2024-02-12T19:10:17.817919393Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8\"" Feb 12 19:10:17.819588 env[1139]: time="2024-02-12T19:10:17.819552796Z" level=info msg="StartContainer for \"73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8\"" Feb 12 19:10:17.834944 systemd[1]: Started cri-containerd-73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8.scope. Feb 12 19:10:17.873344 env[1139]: time="2024-02-12T19:10:17.873297690Z" level=info msg="StartContainer for \"73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8\" returns successfully" Feb 12 19:10:17.880658 systemd[1]: cri-containerd-73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8.scope: Deactivated successfully. 
Feb 12 19:10:17.964207 env[1139]: time="2024-02-12T19:10:17.964160376Z" level=info msg="shim disconnected" id=73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8 Feb 12 19:10:17.964207 env[1139]: time="2024-02-12T19:10:17.964206501Z" level=warning msg="cleaning up after shim disconnected" id=73a67fb0553e02a7f6a6dfd0172c10a3e14774ac722efaa517ae04d45a0772f8 namespace=k8s.io Feb 12 19:10:17.964444 env[1139]: time="2024-02-12T19:10:17.964234064Z" level=info msg="cleaning up dead shim" Feb 12 19:10:17.971804 env[1139]: time="2024-02-12T19:10:17.971735174Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:10:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2367 runtime=io.containerd.runc.v2\n" Feb 12 19:10:18.328503 kubelet[1998]: E0212 19:10:18.328473 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:18.330231 kubelet[1998]: E0212 19:10:18.330187 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:18.331814 env[1139]: time="2024-02-12T19:10:18.331770592Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 12 19:10:19.387587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450871966.mount: Deactivated successfully. Feb 12 19:10:19.959297 env[1139]: time="2024-02-12T19:10:19.959250031Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:19.961139 env[1139]: time="2024-02-12T19:10:19.961076392Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:19.963457 env[1139]: time="2024-02-12T19:10:19.963429158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:19.965171 env[1139]: time="2024-02-12T19:10:19.965138028Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:10:19.965779 env[1139]: time="2024-02-12T19:10:19.965738881Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\"" Feb 12 19:10:19.968447 env[1139]: time="2024-02-12T19:10:19.968384874Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:10:19.984029 env[1139]: time="2024-02-12T19:10:19.983974244Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67\"" Feb 12 19:10:19.984664 env[1139]: time="2024-02-12T19:10:19.984629342Z" level=info 
msg="StartContainer for \"14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67\"" Feb 12 19:10:19.998331 systemd[1]: Started cri-containerd-14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67.scope. Feb 12 19:10:20.035245 systemd[1]: cri-containerd-14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67.scope: Deactivated successfully. Feb 12 19:10:20.043161 env[1139]: time="2024-02-12T19:10:20.043108651Z" level=info msg="StartContainer for \"14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67\" returns successfully" Feb 12 19:10:20.062439 kubelet[1998]: I0212 19:10:20.061643 1998 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:10:20.085919 kubelet[1998]: I0212 19:10:20.084118 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:20.086440 kubelet[1998]: I0212 19:10:20.086127 1998 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:10:20.090167 systemd[1]: Created slice kubepods-burstable-podda38ac4f_0a62_4bbb_a147_c496a3d1e4f9.slice. Feb 12 19:10:20.093856 systemd[1]: Created slice kubepods-burstable-pod2afbc787_ed75_47ab_ab6f_2372901b5131.slice. Feb 12 19:10:20.141688 env[1139]: time="2024-02-12T19:10:20.141624048Z" level=info msg="shim disconnected" id=14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67 Feb 12 19:10:20.141688 env[1139]: time="2024-02-12T19:10:20.141673292Z" level=warning msg="cleaning up after shim disconnected" id=14d3b6da47eb351ed558748a37c35e0fa7cad4618afe2888939c2d7244889d67 namespace=k8s.io Feb 12 19:10:20.141688 env[1139]: time="2024-02-12T19:10:20.141683173Z" level=info msg="cleaning up dead shim" Feb 12 19:10:20.143823 kubelet[1998]: I0212 19:10:20.143156 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da38ac4f-0a62-4bbb-a147-c496a3d1e4f9-config-volume\") pod \"coredns-787d4945fb-kpxfq\" (UID: \"da38ac4f-0a62-4bbb-a147-c496a3d1e4f9\") " pod="kube-system/coredns-787d4945fb-kpxfq" Feb 12 19:10:20.143823 kubelet[1998]: I0212 19:10:20.143199 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzngm\" (UniqueName: \"kubernetes.io/projected/da38ac4f-0a62-4bbb-a147-c496a3d1e4f9-kube-api-access-bzngm\") pod \"coredns-787d4945fb-kpxfq\" (UID: \"da38ac4f-0a62-4bbb-a147-c496a3d1e4f9\") " pod="kube-system/coredns-787d4945fb-kpxfq" Feb 12 19:10:20.143823 kubelet[1998]: I0212 19:10:20.143237 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2afbc787-ed75-47ab-ab6f-2372901b5131-config-volume\") pod \"coredns-787d4945fb-5trf8\" (UID: \"2afbc787-ed75-47ab-ab6f-2372901b5131\") " pod="kube-system/coredns-787d4945fb-5trf8" Feb 12 19:10:20.143823 kubelet[1998]: I0212 19:10:20.143260 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjmz\" (UniqueName: \"kubernetes.io/projected/2afbc787-ed75-47ab-ab6f-2372901b5131-kube-api-access-kwjmz\") pod \"coredns-787d4945fb-5trf8\" (UID: \"2afbc787-ed75-47ab-ab6f-2372901b5131\") " pod="kube-system/coredns-787d4945fb-5trf8" Feb 12 19:10:20.151995 env[1139]: time="2024-02-12T19:10:20.151941418Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:10:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Feb 12 19:10:20.305955 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317167816.mount: Deactivated successfully. Feb 12 19:10:20.333122 kubelet[1998]: E0212 19:10:20.333097 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:20.336470 env[1139]: time="2024-02-12T19:10:20.336426658Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 12 19:10:20.391020 env[1139]: time="2024-02-12T19:10:20.390959992Z" level=info msg="CreateContainer within sandbox \"cc71650902522abb1bc924a4d5f04bffec624acffdd583277531b93517448cd1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ebb0a554f227cf5f02150c34b9c4784bdf46b63e7ed2dc0b8bfd0d19886fac9a\"" Feb 12 19:10:20.392913 env[1139]: time="2024-02-12T19:10:20.391757457Z" level=info msg="StartContainer for \"ebb0a554f227cf5f02150c34b9c4784bdf46b63e7ed2dc0b8bfd0d19886fac9a\"" Feb 12 19:10:20.395979 kubelet[1998]: E0212 19:10:20.395941 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:20.396604 env[1139]: time="2024-02-12T19:10:20.396569934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kpxfq,Uid:da38ac4f-0a62-4bbb-a147-c496a3d1e4f9,Namespace:kube-system,Attempt:0,}" Feb 12 19:10:20.397532 kubelet[1998]: E0212 19:10:20.397511 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:20.397862 env[1139]: time="2024-02-12T19:10:20.397825277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5trf8,Uid:2afbc787-ed75-47ab-ab6f-2372901b5131,Namespace:kube-system,Attempt:0,}" Feb 12 19:10:20.411173 systemd[1]: Started cri-containerd-ebb0a554f227cf5f02150c34b9c4784bdf46b63e7ed2dc0b8bfd0d19886fac9a.scope. 
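The short-lived install-cni-plugin and install-cni containers above, followed by the long-running kube-flannel container, match the usual shape of the kube-flannel DaemonSet: two init containers copy the CNI plugin binary and the CNI config onto the host, then flanneld itself starts. A trimmed sketch of that pod spec, with paths and commands taken from the upstream flannel manifest rather than from this node (the volume names match the mounts logged at 19:10:15):

    initContainers:
    - name: install-cni-plugin
      image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
      command: ["cp", "-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin            # hostPath /opt/cni/bin
        mountPath: /opt/cni/bin
    - name: install-cni
      image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
      command: ["cp", "-f", "/etc/kube-flannel/cni-conf.json", "/etc/cni/net.d/10-flannel.conflist"]
      volumeMounts:
      - name: cni                   # hostPath /etc/cni/net.d
        mountPath: /etc/cni/net.d
      - name: flannel-cfg           # the flannel-cfg ConfigMap mounted above
        mountPath: /etc/kube-flannel/
    containers:
    - name: kube-flannel
      image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
      # flanneld writes /run/flannel/subnet.env once it holds a lease (see the note further below)

Each init container exits as soon as its copy finishes, which is why its containerd scope is deactivated and the shim cleaned up immediately after the StartContainer call returns.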
Feb 12 19:10:20.452897 env[1139]: time="2024-02-12T19:10:20.452818368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kpxfq,Uid:da38ac4f-0a62-4bbb-a147-c496a3d1e4f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39e7dc3b29bdd37246eac2a64507a775df99bbf1fb02bc18cba262b4681ad09d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 19:10:20.453085 kubelet[1998]: E0212 19:10:20.453042 1998 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e7dc3b29bdd37246eac2a64507a775df99bbf1fb02bc18cba262b4681ad09d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 19:10:20.453147 kubelet[1998]: E0212 19:10:20.453098 1998 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e7dc3b29bdd37246eac2a64507a775df99bbf1fb02bc18cba262b4681ad09d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-kpxfq" Feb 12 19:10:20.453147 kubelet[1998]: E0212 19:10:20.453120 1998 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e7dc3b29bdd37246eac2a64507a775df99bbf1fb02bc18cba262b4681ad09d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-kpxfq" Feb 12 19:10:20.453201 kubelet[1998]: E0212 19:10:20.453168 1998 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-kpxfq_kube-system(da38ac4f-0a62-4bbb-a147-c496a3d1e4f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-kpxfq_kube-system(da38ac4f-0a62-4bbb-a147-c496a3d1e4f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39e7dc3b29bdd37246eac2a64507a775df99bbf1fb02bc18cba262b4681ad09d\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-kpxfq" podUID=da38ac4f-0a62-4bbb-a147-c496a3d1e4f9 Feb 12 19:10:20.453274 env[1139]: time="2024-02-12T19:10:20.453183358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5trf8,Uid:2afbc787-ed75-47ab-ab6f-2372901b5131,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"047ea3946d530940dfb62bd6dc6a3c6c16be5a73a2b95ed17db1980ffd5b86c7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 19:10:20.453543 kubelet[1998]: E0212 19:10:20.453410 1998 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ea3946d530940dfb62bd6dc6a3c6c16be5a73a2b95ed17db1980ffd5b86c7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 19:10:20.453543 kubelet[1998]: E0212 19:10:20.453448 1998 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ea3946d530940dfb62bd6dc6a3c6c16be5a73a2b95ed17db1980ffd5b86c7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-787d4945fb-5trf8" Feb 12 19:10:20.453543 kubelet[1998]: E0212 19:10:20.453476 1998 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ea3946d530940dfb62bd6dc6a3c6c16be5a73a2b95ed17db1980ffd5b86c7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5trf8" Feb 12 19:10:20.453543 kubelet[1998]: E0212 19:10:20.453514 1998 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-5trf8_kube-system(2afbc787-ed75-47ab-ab6f-2372901b5131)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-5trf8_kube-system(2afbc787-ed75-47ab-ab6f-2372901b5131)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"047ea3946d530940dfb62bd6dc6a3c6c16be5a73a2b95ed17db1980ffd5b86c7\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-5trf8" podUID=2afbc787-ed75-47ab-ab6f-2372901b5131 Feb 12 19:10:20.464546 env[1139]: time="2024-02-12T19:10:20.464021571Z" level=info msg="StartContainer for \"ebb0a554f227cf5f02150c34b9c4784bdf46b63e7ed2dc0b8bfd0d19886fac9a\" returns successfully" Feb 12 19:10:21.338252 kubelet[1998]: E0212 19:10:21.337442 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:22.100320 systemd-networkd[1040]: flannel.1: Link UP Feb 12 19:10:22.100327 systemd-networkd[1040]: flannel.1: Gained carrier Feb 12 19:10:22.343050 kubelet[1998]: E0212 19:10:22.343020 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:23.404462 systemd-networkd[1040]: flannel.1: Gained IPv6LL Feb 12 19:10:32.296956 kubelet[1998]: E0212 19:10:32.296922 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:32.297345 kubelet[1998]: E0212 19:10:32.297025 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:32.297417 env[1139]: time="2024-02-12T19:10:32.297365906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kpxfq,Uid:da38ac4f-0a62-4bbb-a147-c496a3d1e4f9,Namespace:kube-system,Attempt:0,}" Feb 12 19:10:32.297830 env[1139]: time="2024-02-12T19:10:32.297788842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5trf8,Uid:2afbc787-ed75-47ab-ab6f-2372901b5131,Namespace:kube-system,Attempt:0,}" Feb 12 19:10:32.331150 systemd-networkd[1040]: cni0: Link UP Feb 12 19:10:32.331155 systemd-networkd[1040]: cni0: Gained carrier Feb 12 19:10:32.332692 systemd-networkd[1040]: cni0: Lost carrier Feb 12 19:10:32.336440 systemd-networkd[1040]: veth13ef8877: Link UP Feb 12 19:10:32.338659 kernel: cni0: port 1(veth13ef8877) entered blocking state Feb 12 19:10:32.338737 kernel: cni0: port 1(veth13ef8877) entered disabled state Feb 12 19:10:32.338758 kernel: device veth13ef8877 entered promiscuous mode Feb 12 19:10:32.339769 kernel: cni0: port 1(veth13ef8877) entered 
blocking state Feb 12 19:10:32.339831 kernel: cni0: port 1(veth13ef8877) entered forwarding state Feb 12 19:10:32.345371 kernel: cni0: port 1(veth13ef8877) entered disabled state Feb 12 19:10:32.345458 kernel: cni0: port 2(vetha2d8b4f5) entered blocking state Feb 12 19:10:32.346548 kernel: cni0: port 2(vetha2d8b4f5) entered disabled state Feb 12 19:10:32.346608 kernel: device vetha2d8b4f5 entered promiscuous mode Feb 12 19:10:32.347558 kernel: cni0: port 2(vetha2d8b4f5) entered blocking state Feb 12 19:10:32.347613 kernel: cni0: port 2(vetha2d8b4f5) entered forwarding state Feb 12 19:10:32.349235 kernel: cni0: port 2(vetha2d8b4f5) entered disabled state Feb 12 19:10:32.350484 systemd-networkd[1040]: vetha2d8b4f5: Link UP Feb 12 19:10:32.351095 systemd-networkd[1040]: cni0: Gained carrier Feb 12 19:10:32.355429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth13ef8877: link becomes ready Feb 12 19:10:32.355498 kernel: cni0: port 1(veth13ef8877) entered blocking state Feb 12 19:10:32.355514 kernel: cni0: port 1(veth13ef8877) entered forwarding state Feb 12 19:10:32.356018 systemd-networkd[1040]: veth13ef8877: Gained carrier Feb 12 19:10:32.358266 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha2d8b4f5: link becomes ready Feb 12 19:10:32.358425 kernel: cni0: port 2(vetha2d8b4f5) entered blocking state Feb 12 19:10:32.358471 kernel: cni0: port 2(vetha2d8b4f5) entered forwarding state Feb 12 19:10:32.358365 systemd-networkd[1040]: vetha2d8b4f5: Gained carrier Feb 12 19:10:32.360603 env[1139]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400011a8e8), "name":"cbr0", "type":"bridge"} Feb 12 19:10:32.360692 env[1139]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 12 19:10:32.360692 env[1139]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016928), "name":"cbr0", "type":"bridge"} Feb 12 19:10:32.370480 env[1139]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T19:10:32.370421601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:10:32.370581 env[1139]: time="2024-02-12T19:10:32.370490363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:10:32.370581 env[1139]: time="2024-02-12T19:10:32.370513404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:10:32.370797 env[1139]: time="2024-02-12T19:10:32.370761773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce pid=2713 runtime=io.containerd.runc.v2 Feb 12 19:10:32.371412 env[1139]: time="2024-02-12T19:10:32.371343396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:10:32.371412 env[1139]: time="2024-02-12T19:10:32.371382637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:10:32.371542 env[1139]: time="2024-02-12T19:10:32.371401038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:10:32.374415 env[1139]: time="2024-02-12T19:10:32.372009181Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55545e8fd80cae1d15ef89f1ac703159c4186def563c83407e3e6a5583326d6 pid=2722 runtime=io.containerd.runc.v2 Feb 12 19:10:32.385525 systemd[1]: Started cri-containerd-3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce.scope. Feb 12 19:10:32.400266 systemd[1]: Started cri-containerd-c55545e8fd80cae1d15ef89f1ac703159c4186def563c83407e3e6a5583326d6.scope. Feb 12 19:10:32.419275 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:10:32.420682 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:10:32.439358 env[1139]: time="2024-02-12T19:10:32.439312337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5trf8,Uid:2afbc787-ed75-47ab-ab6f-2372901b5131,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55545e8fd80cae1d15ef89f1ac703159c4186def563c83407e3e6a5583326d6\"" Feb 12 19:10:32.440020 kubelet[1998]: E0212 19:10:32.439996 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:32.440403 env[1139]: time="2024-02-12T19:10:32.440347976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kpxfq,Uid:da38ac4f-0a62-4bbb-a147-c496a3d1e4f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce\"" Feb 12 19:10:32.441741 kubelet[1998]: E0212 19:10:32.441419 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:32.442984 env[1139]: time="2024-02-12T19:10:32.442712586Z" level=info msg="CreateContainer within sandbox \"c55545e8fd80cae1d15ef89f1ac703159c4186def563c83407e3e6a5583326d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:10:32.443973 env[1139]: time="2024-02-12T19:10:32.443942633Z" level=info msg="CreateContainer within sandbox \"3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:10:32.456251 env[1139]: time="2024-02-12T19:10:32.456195858Z" level=info msg="CreateContainer within sandbox \"c55545e8fd80cae1d15ef89f1ac703159c4186def563c83407e3e6a5583326d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38c740a6e0e7901f45038dfcbd7395216d7e4344abb235f2813e53efb34db6ab\"" Feb 12 19:10:32.457795 env[1139]: time="2024-02-12T19:10:32.456866324Z" level=info msg="StartContainer for \"38c740a6e0e7901f45038dfcbd7395216d7e4344abb235f2813e53efb34db6ab\"" Feb 12 19:10:32.461985 env[1139]: time="2024-02-12T19:10:32.461945917Z" level=info msg="CreateContainer within sandbox \"3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c52d8ef3ad1781cbfe049aa0cc6de72639e3f837ced53bb24ff0226c4e4685ff\"" Feb 12 19:10:32.463552 env[1139]: time="2024-02-12T19:10:32.462505378Z" level=info msg="StartContainer for \"c52d8ef3ad1781cbfe049aa0cc6de72639e3f837ced53bb24ff0226c4e4685ff\"" Feb 12 19:10:32.472162 systemd[1]: Started cri-containerd-38c740a6e0e7901f45038dfcbd7395216d7e4344abb235f2813e53efb34db6ab.scope. Feb 12 19:10:32.479542 systemd[1]: Started cri-containerd-c52d8ef3ad1781cbfe049aa0cc6de72639e3f837ced53bb24ff0226c4e4685ff.scope. Feb 12 19:10:32.521974 env[1139]: time="2024-02-12T19:10:32.517262418Z" level=info msg="StartContainer for \"c52d8ef3ad1781cbfe049aa0cc6de72639e3f837ced53bb24ff0226c4e4685ff\" returns successfully" Feb 12 19:10:32.527117 env[1139]: time="2024-02-12T19:10:32.523644140Z" level=info msg="StartContainer for \"38c740a6e0e7901f45038dfcbd7395216d7e4344abb235f2813e53efb34db6ab\" returns successfully" Feb 12 19:10:33.309558 systemd[1]: run-containerd-runc-k8s.io-3479dcefe92cf5fe6bbd6cbbf3e3a5e50a63217fbb7e893c43818eb1acf1e8ce-runc.xZIIJE.mount: Deactivated successfully. 
Feb 12 19:10:33.360606 kubelet[1998]: E0212 19:10:33.360569 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:33.362722 kubelet[1998]: E0212 19:10:33.362687 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:33.371718 kubelet[1998]: I0212 19:10:33.371684 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-kpxfq" podStartSLOduration=17.371656073 pod.CreationTimestamp="2024-02-12 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:33.371338622 +0000 UTC m=+32.233878361" watchObservedRunningTime="2024-02-12 19:10:33.371656073 +0000 UTC m=+32.234195812" Feb 12 19:10:33.371952 kubelet[1998]: I0212 19:10:33.371924 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pstqh" podStartSLOduration=-9.22337201848287e+09 pod.CreationTimestamp="2024-02-12 19:10:15 +0000 UTC" firstStartedPulling="2024-02-12 19:10:16.328536461 +0000 UTC m=+15.191076200" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:21.493005941 +0000 UTC m=+20.355545640" watchObservedRunningTime="2024-02-12 19:10:33.371905402 +0000 UTC m=+32.234445101" Feb 12 19:10:33.379529 kubelet[1998]: I0212 19:10:33.379494 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-5trf8" podStartSLOduration=17.379460551 pod.CreationTimestamp="2024-02-12 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:10:33.379318946 +0000 UTC m=+32.241858685" watchObservedRunningTime="2024-02-12 19:10:33.379460551 +0000 UTC m=+32.242000250" Feb 12 19:10:33.580385 systemd-networkd[1040]: veth13ef8877: Gained IPv6LL Feb 12 19:10:33.772345 systemd-networkd[1040]: vetha2d8b4f5: Gained IPv6LL Feb 12 19:10:34.348325 systemd-networkd[1040]: cni0: Gained IPv6LL Feb 12 19:10:34.364815 kubelet[1998]: E0212 19:10:34.364778 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:34.365540 kubelet[1998]: E0212 19:10:34.365510 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:35.366689 kubelet[1998]: E0212 19:10:35.366653 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:35.367141 kubelet[1998]: E0212 19:10:35.367124 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:10:42.622193 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:45080.service. 
Feb 12 19:10:42.656757 sshd[2955]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:10:42.658522 sshd[2955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:10:42.663457 systemd[1]: Started session-6.scope. Feb 12 19:10:42.664812 systemd-logind[1127]: New session 6 of user core. Feb 12 19:10:42.799342 sshd[2955]: pam_unix(sshd:session): session closed for user core Feb 12 19:10:42.802209 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:45080.service: Deactivated successfully. Feb 12 19:10:42.802988 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:10:42.805519 systemd-logind[1127]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:10:42.806594 systemd-logind[1127]: Removed session 6. Feb 12 19:10:47.807189 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:45090.service. Feb 12 19:10:47.842556 sshd[2989]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:10:47.844233 sshd[2989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:10:47.848673 systemd-logind[1127]: New session 7 of user core. Feb 12 19:10:47.849077 systemd[1]: Started session-7.scope. Feb 12 19:10:47.965665 sshd[2989]: pam_unix(sshd:session): session closed for user core Feb 12 19:10:47.969826 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:45090.service: Deactivated successfully. Feb 12 19:10:47.970566 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:10:47.971366 systemd-logind[1127]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:10:47.973616 systemd-logind[1127]: Removed session 7. Feb 12 19:10:52.970705 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:33924.service. Feb 12 19:10:53.028477 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 33924 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:10:53.030076 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:10:53.035205 systemd-logind[1127]: New session 8 of user core. Feb 12 19:10:53.035560 systemd[1]: Started session-8.scope. Feb 12 19:10:53.150085 sshd[3027]: pam_unix(sshd:session): session closed for user core Feb 12 19:10:53.152611 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:10:53.153237 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:33924.service: Deactivated successfully. Feb 12 19:10:53.154254 systemd-logind[1127]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:10:53.155461 systemd-logind[1127]: Removed session 8. Feb 12 19:10:58.158974 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:33932.service. Feb 12 19:10:58.194675 sshd[3061]: Accepted publickey for core from 10.0.0.1 port 33932 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:10:58.196027 sshd[3061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:10:58.200268 systemd-logind[1127]: New session 9 of user core. Feb 12 19:10:58.201963 systemd[1]: Started session-9.scope. Feb 12 19:10:58.342505 sshd[3061]: pam_unix(sshd:session): session closed for user core Feb 12 19:10:58.345467 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:33932.service: Deactivated successfully. Feb 12 19:10:58.346280 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:10:58.347149 systemd-logind[1127]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:10:58.348056 systemd-logind[1127]: Removed session 9. 
Feb 12 19:11:03.346542 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:50076.service. Feb 12 19:11:03.380980 sshd[3099]: Accepted publickey for core from 10.0.0.1 port 50076 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:03.386546 sshd[3099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:03.391558 systemd-logind[1127]: New session 10 of user core. Feb 12 19:11:03.392091 systemd[1]: Started session-10.scope. Feb 12 19:11:03.515402 sshd[3099]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:03.519888 systemd-logind[1127]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:11:03.519911 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:50076.service: Deactivated successfully. Feb 12 19:11:03.520755 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:11:03.521540 systemd-logind[1127]: Removed session 10. Feb 12 19:11:08.520052 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:50092.service. Feb 12 19:11:08.562420 sshd[3131]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:08.564032 sshd[3131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:08.573944 systemd-logind[1127]: New session 11 of user core. Feb 12 19:11:08.574542 systemd[1]: Started session-11.scope. Feb 12 19:11:08.710020 sshd[3131]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:08.714001 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:50104.service. Feb 12 19:11:08.714597 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:50092.service: Deactivated successfully. Feb 12 19:11:08.715414 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:11:08.715936 systemd-logind[1127]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:11:08.716863 systemd-logind[1127]: Removed session 11. Feb 12 19:11:08.749050 sshd[3144]: Accepted publickey for core from 10.0.0.1 port 50104 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:08.750353 sshd[3144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:08.753737 systemd-logind[1127]: New session 12 of user core. Feb 12 19:11:08.754667 systemd[1]: Started session-12.scope. Feb 12 19:11:08.974131 sshd[3144]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:08.977992 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:50104.service: Deactivated successfully. Feb 12 19:11:08.978674 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:11:08.980671 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:50108.service. Feb 12 19:11:08.982113 systemd-logind[1127]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:11:08.983144 systemd-logind[1127]: Removed session 12. Feb 12 19:11:09.020226 sshd[3168]: Accepted publickey for core from 10.0.0.1 port 50108 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:09.021456 sshd[3168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:09.025956 systemd[1]: Started session-13.scope. Feb 12 19:11:09.026294 systemd-logind[1127]: New session 13 of user core. Feb 12 19:11:09.160529 sshd[3168]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:09.163110 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:50108.service: Deactivated successfully. Feb 12 19:11:09.163874 systemd[1]: session-13.scope: Deactivated successfully. 
Feb 12 19:11:09.166538 systemd-logind[1127]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:11:09.167578 systemd-logind[1127]: Removed session 13. Feb 12 19:11:14.164806 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:33520.service. Feb 12 19:11:14.199675 sshd[3199]: Accepted publickey for core from 10.0.0.1 port 33520 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:14.201324 sshd[3199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:14.208130 systemd-logind[1127]: New session 14 of user core. Feb 12 19:11:14.208625 systemd[1]: Started session-14.scope. Feb 12 19:11:14.328432 sshd[3199]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:14.332281 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:33522.service. Feb 12 19:11:14.332854 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:33520.service: Deactivated successfully. Feb 12 19:11:14.334646 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:11:14.335614 systemd-logind[1127]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:11:14.336765 systemd-logind[1127]: Removed session 14. Feb 12 19:11:14.371461 sshd[3211]: Accepted publickey for core from 10.0.0.1 port 33522 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:14.372713 sshd[3211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:14.378934 systemd-logind[1127]: New session 15 of user core. Feb 12 19:11:14.379757 systemd[1]: Started session-15.scope. Feb 12 19:11:14.634311 sshd[3211]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:14.638187 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:33534.service. Feb 12 19:11:14.640537 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:11:14.641186 systemd-logind[1127]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:11:14.641400 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:33522.service: Deactivated successfully. Feb 12 19:11:14.642424 systemd-logind[1127]: Removed session 15. Feb 12 19:11:14.674053 sshd[3223]: Accepted publickey for core from 10.0.0.1 port 33534 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:14.675843 sshd[3223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:14.679249 systemd-logind[1127]: New session 16 of user core. Feb 12 19:11:14.680157 systemd[1]: Started session-16.scope. Feb 12 19:11:15.420936 sshd[3223]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:15.425145 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:33548.service. Feb 12 19:11:15.425717 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:33534.service: Deactivated successfully. Feb 12 19:11:15.426771 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:11:15.427487 systemd-logind[1127]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:11:15.429698 systemd-logind[1127]: Removed session 16. Feb 12 19:11:15.472574 sshd[3253]: Accepted publickey for core from 10.0.0.1 port 33548 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:15.474378 sshd[3253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:15.478016 systemd-logind[1127]: New session 17 of user core. Feb 12 19:11:15.478943 systemd[1]: Started session-17.scope. 
Feb 12 19:11:15.673306 sshd[3253]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:15.677528 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:33548.service: Deactivated successfully. Feb 12 19:11:15.678178 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:11:15.679405 systemd-logind[1127]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:11:15.680694 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:33564.service. Feb 12 19:11:15.681401 systemd-logind[1127]: Removed session 17. Feb 12 19:11:15.716838 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 33564 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:15.718527 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:15.721884 systemd-logind[1127]: New session 18 of user core. Feb 12 19:11:15.722846 systemd[1]: Started session-18.scope. Feb 12 19:11:15.829309 sshd[3302]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:15.831725 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:33564.service: Deactivated successfully. Feb 12 19:11:15.832503 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:11:15.833060 systemd-logind[1127]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:11:15.833737 systemd-logind[1127]: Removed session 18. Feb 12 19:11:20.833897 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:33578.service. Feb 12 19:11:20.867599 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:20.868714 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:20.871914 systemd-logind[1127]: New session 19 of user core. Feb 12 19:11:20.873069 systemd[1]: Started session-19.scope. Feb 12 19:11:20.978321 sshd[3362]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:20.981810 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:33578.service: Deactivated successfully. Feb 12 19:11:20.982557 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:11:20.984163 systemd-logind[1127]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:11:20.984927 systemd-logind[1127]: Removed session 19. Feb 12 19:11:25.983366 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:58326.service. Feb 12 19:11:26.017564 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 58326 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:26.018723 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:26.022551 systemd-logind[1127]: New session 20 of user core. Feb 12 19:11:26.023236 systemd[1]: Started session-20.scope. Feb 12 19:11:26.131125 sshd[3393]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:26.133436 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:58326.service: Deactivated successfully. Feb 12 19:11:26.134224 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:11:26.134815 systemd-logind[1127]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:11:26.135589 systemd-logind[1127]: Removed session 20. 
Feb 12 19:11:26.296107 kubelet[1998]: E0212 19:11:26.295991 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:27.299413 kubelet[1998]: E0212 19:11:27.299376 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:31.135633 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:58338.service. Feb 12 19:11:31.170168 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:31.171372 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:31.175318 systemd-logind[1127]: New session 21 of user core. Feb 12 19:11:31.176008 systemd[1]: Started session-21.scope. Feb 12 19:11:31.301008 sshd[3424]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:31.304231 systemd-logind[1127]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:11:31.304457 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:58338.service: Deactivated successfully. Feb 12 19:11:31.305160 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:11:31.306033 systemd-logind[1127]: Removed session 21. Feb 12 19:11:32.296120 kubelet[1998]: E0212 19:11:32.296083 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:36.302798 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:46024.service. Feb 12 19:11:36.336280 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:36.337810 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:36.341700 systemd[1]: Started session-22.scope. Feb 12 19:11:36.342135 systemd-logind[1127]: New session 22 of user core. Feb 12 19:11:36.451111 sshd[3455]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:36.453426 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:46024.service: Deactivated successfully. Feb 12 19:11:36.454251 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:11:36.455173 systemd-logind[1127]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:11:36.455845 systemd-logind[1127]: Removed session 22. Feb 12 19:11:37.296882 kubelet[1998]: E0212 19:11:37.296841 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"