Feb 9 18:34:12.722718 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 18:34:12.722736 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024 Feb 9 18:34:12.722744 kernel: efi: EFI v2.70 by EDK II Feb 9 18:34:12.722750 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 18:34:12.722755 kernel: random: crng init done Feb 9 18:34:12.722760 kernel: ACPI: Early table checksum verification disabled Feb 9 18:34:12.722766 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 18:34:12.722773 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 18:34:12.722778 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722783 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722791 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722796 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722801 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722807 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722814 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722821 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722827 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:34:12.722832 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 18:34:12.722838 kernel: NUMA: Failed to initialise from firmware Feb 9 18:34:12.722844 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:34:12.722849 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Feb 9 18:34:12.722855 kernel: Zone ranges: Feb 9 18:34:12.722861 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:34:12.722868 kernel: DMA32 empty Feb 9 18:34:12.722873 kernel: Normal empty Feb 9 18:34:12.722879 kernel: Movable zone start for each node Feb 9 18:34:12.722884 kernel: Early memory node ranges Feb 9 18:34:12.722890 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 18:34:12.722896 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 18:34:12.722901 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 18:34:12.722907 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 18:34:12.722913 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 18:34:12.722918 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 18:34:12.722924 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 18:34:12.722930 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:34:12.722936 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 18:34:12.722942 kernel: psci: probing for conduit method from ACPI. Feb 9 18:34:12.722948 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 18:34:12.722954 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 18:34:12.722959 kernel: psci: Trusted OS migration not required Feb 9 18:34:12.722967 kernel: psci: SMC Calling Convention v1.1 Feb 9 18:34:12.722991 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 18:34:12.723000 kernel: ACPI: SRAT not present Feb 9 18:34:12.723006 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 18:34:12.723013 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 18:34:12.723019 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 18:34:12.723025 kernel: Detected PIPT I-cache on CPU0 Feb 9 18:34:12.723031 kernel: CPU features: detected: GIC system register CPU interface Feb 9 18:34:12.723037 kernel: CPU features: detected: Hardware dirty bit management Feb 9 18:34:12.723043 kernel: CPU features: detected: Spectre-v4 Feb 9 18:34:12.723049 kernel: CPU features: detected: Spectre-BHB Feb 9 18:34:12.723056 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 18:34:12.723062 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 18:34:12.723069 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 18:34:12.723074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 18:34:12.723080 kernel: Policy zone: DMA Feb 9 18:34:12.723088 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:34:12.723094 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:34:12.723100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 18:34:12.723106 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:34:12.723112 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:34:12.723119 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved) Feb 9 18:34:12.723126 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 18:34:12.723132 kernel: trace event string verifier disabled Feb 9 18:34:12.723138 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 18:34:12.723144 kernel: rcu: RCU event tracing is enabled. Feb 9 18:34:12.723151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 18:34:12.723157 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 18:34:12.723163 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:34:12.723169 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 18:34:12.723175 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 18:34:12.723181 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 18:34:12.723187 kernel: GICv3: 256 SPIs implemented Feb 9 18:34:12.723194 kernel: GICv3: 0 Extended SPIs implemented Feb 9 18:34:12.723200 kernel: GICv3: Distributor has no Range Selector support Feb 9 18:34:12.723206 kernel: Root IRQ handler: gic_handle_irq Feb 9 18:34:12.723212 kernel: GICv3: 16 PPIs implemented Feb 9 18:34:12.723218 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 18:34:12.723224 kernel: ACPI: SRAT not present Feb 9 18:34:12.723229 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 18:34:12.723236 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 18:34:12.723242 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 18:34:12.723248 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 18:34:12.723254 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 18:34:12.723260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:34:12.723267 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 18:34:12.723273 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 18:34:12.723280 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 18:34:12.723286 kernel: arm-pv: using stolen time PV Feb 9 18:34:12.723292 kernel: Console: colour dummy device 80x25 Feb 9 18:34:12.723298 kernel: ACPI: Core revision 20210730 Feb 9 18:34:12.723304 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 18:34:12.723311 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:34:12.723317 kernel: LSM: Security Framework initializing Feb 9 18:34:12.723323 kernel: SELinux: Initializing. Feb 9 18:34:12.723330 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:34:12.723336 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:34:12.723343 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:34:12.723349 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 18:34:12.723355 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 18:34:12.723361 kernel: Remapping and enabling EFI services. Feb 9 18:34:12.723367 kernel: smp: Bringing up secondary CPUs ... 
Feb 9 18:34:12.723373 kernel: Detected PIPT I-cache on CPU1 Feb 9 18:34:12.723380 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 18:34:12.723388 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 18:34:12.723394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:34:12.723400 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 18:34:12.723407 kernel: Detected PIPT I-cache on CPU2 Feb 9 18:34:12.723413 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 18:34:12.723419 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 18:34:12.723426 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:34:12.723432 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 18:34:12.723438 kernel: Detected PIPT I-cache on CPU3 Feb 9 18:34:12.723444 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 18:34:12.723452 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 18:34:12.723459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:34:12.723465 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 18:34:12.723471 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 18:34:12.723481 kernel: SMP: Total of 4 processors activated. Feb 9 18:34:12.723489 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 18:34:12.723495 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 18:34:12.723502 kernel: CPU features: detected: Common not Private translations Feb 9 18:34:12.723508 kernel: CPU features: detected: CRC32 instructions Feb 9 18:34:12.723515 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 18:34:12.723521 kernel: CPU features: detected: LSE atomic instructions Feb 9 18:34:12.723528 kernel: CPU features: detected: Privileged Access Never Feb 9 18:34:12.723535 kernel: CPU features: detected: RAS Extension Support Feb 9 18:34:12.723542 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 18:34:12.723548 kernel: CPU: All CPU(s) started at EL1 Feb 9 18:34:12.723555 kernel: alternatives: patching kernel code Feb 9 18:34:12.723562 kernel: devtmpfs: initialized Feb 9 18:34:12.723569 kernel: KASLR enabled Feb 9 18:34:12.723576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:34:12.723582 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 18:34:12.723589 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:34:12.723595 kernel: SMBIOS 3.0.0 present. 
Feb 9 18:34:12.723610 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 9 18:34:12.723616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:34:12.723623 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 18:34:12.723636 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 18:34:12.723656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 18:34:12.723662 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:34:12.723669 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1 Feb 9 18:34:12.723675 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:34:12.723682 kernel: cpuidle: using governor menu Feb 9 18:34:12.723688 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 18:34:12.723695 kernel: ASID allocator initialised with 32768 entries Feb 9 18:34:12.723701 kernel: ACPI: bus type PCI registered Feb 9 18:34:12.723708 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:34:12.723715 kernel: Serial: AMBA PL011 UART driver Feb 9 18:34:12.723722 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:34:12.723728 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 18:34:12.723735 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:34:12.723742 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 18:34:12.723748 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:34:12.723754 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 18:34:12.723761 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:34:12.723767 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:34:12.723775 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:34:12.723782 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:34:12.723788 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:34:12.723795 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:34:12.723801 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:34:12.723808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 18:34:12.723814 kernel: ACPI: Interpreter enabled Feb 9 18:34:12.723821 kernel: ACPI: Using GIC for interrupt routing Feb 9 18:34:12.723827 kernel: ACPI: MCFG table detected, 1 entries Feb 9 18:34:12.723835 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 9 18:34:12.723841 kernel: printk: console [ttyAMA0] enabled Feb 9 18:34:12.723848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 18:34:12.723966 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 18:34:12.724065 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 18:34:12.724126 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 18:34:12.724185 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 9 18:34:12.724246 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 9 18:34:12.724255 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 9 18:34:12.724261 kernel: PCI host bridge to bus 0000:00 Feb 9 18:34:12.724328 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 9 18:34:12.724382 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Feb 9 18:34:12.724435 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 9 18:34:12.724487 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 18:34:12.724559 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 9 18:34:12.724633 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 18:34:12.724694 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 9 18:34:12.724755 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 9 18:34:12.724814 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 18:34:12.724875 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 18:34:12.724934 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 9 18:34:12.725049 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 9 18:34:12.725111 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 9 18:34:12.725229 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 18:34:12.725284 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 9 18:34:12.725293 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 18:34:12.725300 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 18:34:12.725306 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 18:34:12.725316 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 18:34:12.725322 kernel: iommu: Default domain type: Translated Feb 9 18:34:12.725329 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 18:34:12.725335 kernel: vgaarb: loaded Feb 9 18:34:12.725342 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:34:12.725348 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:34:12.725355 kernel: PTP clock support registered Feb 9 18:34:12.725362 kernel: Registered efivars operations Feb 9 18:34:12.725368 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 18:34:12.725375 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:34:12.725383 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:34:12.725389 kernel: pnp: PnP ACPI init Feb 9 18:34:12.725452 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 9 18:34:12.725462 kernel: pnp: PnP ACPI: found 1 devices Feb 9 18:34:12.725469 kernel: NET: Registered PF_INET protocol family Feb 9 18:34:12.725475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 18:34:12.725482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 18:34:12.725489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:34:12.725496 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:34:12.725503 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 18:34:12.725509 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 18:34:12.725516 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:34:12.725523 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:34:12.725529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:34:12.725535 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:34:12.725542 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 9 18:34:12.725550 kernel: kvm [1]: HYP mode not available Feb 9 18:34:12.725559 kernel: Initialise system trusted keyrings Feb 9 18:34:12.725565 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 18:34:12.725572 kernel: Key type asymmetric registered Feb 9 18:34:12.725579 kernel: Asymmetric key parser 'x509' registered Feb 9 18:34:12.725585 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:34:12.725592 kernel: io scheduler mq-deadline registered Feb 9 18:34:12.725598 kernel: io scheduler kyber registered Feb 9 18:34:12.725605 kernel: io scheduler bfq registered Feb 9 18:34:12.725612 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 18:34:12.725620 kernel: ACPI: button: Power Button [PWRB] Feb 9 18:34:12.725626 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 18:34:12.725690 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 9 18:34:12.725699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:34:12.725705 kernel: thunder_xcv, ver 1.0 Feb 9 18:34:12.725712 kernel: thunder_bgx, ver 1.0 Feb 9 18:34:12.725718 kernel: nicpf, ver 1.0 Feb 9 18:34:12.725724 kernel: nicvf, ver 1.0 Feb 9 18:34:12.725791 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 18:34:12.725848 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:34:12 UTC (1707503652) Feb 9 18:34:12.725857 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 18:34:12.725863 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:34:12.725870 kernel: Segment Routing with IPv6 Feb 9 18:34:12.725876 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:34:12.725883 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:34:12.725889 kernel: Key type dns_resolver registered Feb 9 18:34:12.725896 
kernel: registered taskstats version 1 Feb 9 18:34:12.725904 kernel: Loading compiled-in X.509 certificates Feb 9 18:34:12.725910 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9' Feb 9 18:34:12.725917 kernel: Key type .fscrypt registered Feb 9 18:34:12.725923 kernel: Key type fscrypt-provisioning registered Feb 9 18:34:12.725930 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:34:12.725936 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:34:12.725943 kernel: ima: No architecture policies found Feb 9 18:34:12.725949 kernel: Freeing unused kernel memory: 34688K Feb 9 18:34:12.725956 kernel: Run /init as init process Feb 9 18:34:12.725963 kernel: with arguments: Feb 9 18:34:12.725975 kernel: /init Feb 9 18:34:12.726007 kernel: with environment: Feb 9 18:34:12.726014 kernel: HOME=/ Feb 9 18:34:12.726020 kernel: TERM=linux Feb 9 18:34:12.726026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:34:12.726035 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:34:12.726043 systemd[1]: Detected virtualization kvm. Feb 9 18:34:12.726053 systemd[1]: Detected architecture arm64. Feb 9 18:34:12.726060 systemd[1]: Running in initrd. Feb 9 18:34:12.726066 systemd[1]: No hostname configured, using default hostname. Feb 9 18:34:12.726073 systemd[1]: Hostname set to . Feb 9 18:34:12.726081 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:34:12.726087 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:34:12.726094 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:34:12.726101 systemd[1]: Reached target cryptsetup.target. Feb 9 18:34:12.726109 systemd[1]: Reached target paths.target. Feb 9 18:34:12.726116 systemd[1]: Reached target slices.target. Feb 9 18:34:12.726122 systemd[1]: Reached target swap.target. Feb 9 18:34:12.726129 systemd[1]: Reached target timers.target. Feb 9 18:34:12.726136 systemd[1]: Listening on iscsid.socket. Feb 9 18:34:12.726143 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:34:12.726150 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:34:12.726159 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:34:12.726166 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:34:12.726172 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:34:12.726179 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:34:12.726186 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:34:12.726193 systemd[1]: Reached target sockets.target. Feb 9 18:34:12.726200 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:34:12.726207 systemd[1]: Finished network-cleanup.service. Feb 9 18:34:12.726214 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 18:34:12.726222 systemd[1]: Starting systemd-journald.service... Feb 9 18:34:12.726229 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:34:12.726236 systemd[1]: Starting systemd-resolved.service... Feb 9 18:34:12.726243 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:34:12.726250 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:34:12.726256 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 9 18:34:12.726263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:34:12.726270 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:34:12.726280 systemd-journald[289]: Journal started Feb 9 18:34:12.726320 systemd-journald[289]: Runtime Journal (/run/log/journal/698c2c81fa2c446498699d9499f2ba3d) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:34:12.719864 systemd-modules-load[290]: Inserted module 'overlay' Feb 9 18:34:12.729343 kernel: audit: type=1130 audit(1707503652.726:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.729360 systemd[1]: Started systemd-journald.service. Feb 9 18:34:12.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.729993 kernel: audit: type=1130 audit(1707503652.729:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.730695 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:34:12.738933 kernel: audit: type=1130 audit(1707503652.733:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.738952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:34:12.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.732413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:34:12.743658 systemd-modules-load[290]: Inserted module 'br_netfilter' Feb 9 18:34:12.744351 kernel: Bridge firewalling registered Feb 9 18:34:12.747222 systemd-resolved[291]: Positive Trust Anchors: Feb 9 18:34:12.747248 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:34:12.747276 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:34:12.751343 systemd-resolved[291]: Defaulting to hostname 'linux'. Feb 9 18:34:12.755242 kernel: SCSI subsystem initialized Feb 9 18:34:12.755267 kernel: audit: type=1130 audit(1707503652.754:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:12.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.754128 systemd[1]: Started systemd-resolved.service. Feb 9 18:34:12.755211 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:34:12.757533 systemd[1]: Reached target nss-lookup.target. Feb 9 18:34:12.759179 systemd[1]: Starting dracut-cmdline.service... Feb 9 18:34:12.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.763314 kernel: audit: type=1130 audit(1707503652.757:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.763342 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:34:12.764099 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:34:12.765173 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:34:12.767402 systemd-modules-load[290]: Inserted module 'dm_multipath' Feb 9 18:34:12.768100 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:34:12.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.770146 dracut-cmdline[308]: dracut-dracut-053 Feb 9 18:34:12.770910 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:34:12.772787 kernel: audit: type=1130 audit(1707503652.769:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.772817 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:34:12.777847 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:34:12.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.781008 kernel: audit: type=1130 audit(1707503652.777:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.829002 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:34:12.837007 kernel: iscsi: registered transport (tcp) Feb 9 18:34:12.853030 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:34:12.853081 kernel: QLogic iSCSI HBA Driver Feb 9 18:34:12.894898 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:34:12.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:12.896327 systemd[1]: Starting dracut-pre-udev.service... Feb 9 18:34:12.898564 kernel: audit: type=1130 audit(1707503652.894:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:12.940026 kernel: raid6: neonx8 gen() 13760 MB/s Feb 9 18:34:12.957013 kernel: raid6: neonx8 xor() 10797 MB/s Feb 9 18:34:12.974023 kernel: raid6: neonx4 gen() 13562 MB/s Feb 9 18:34:12.991023 kernel: raid6: neonx4 xor() 11225 MB/s Feb 9 18:34:13.008010 kernel: raid6: neonx2 gen() 13032 MB/s Feb 9 18:34:13.025018 kernel: raid6: neonx2 xor() 10204 MB/s Feb 9 18:34:13.042020 kernel: raid6: neonx1 gen() 10466 MB/s Feb 9 18:34:13.059030 kernel: raid6: neonx1 xor() 8767 MB/s Feb 9 18:34:13.076024 kernel: raid6: int64x8 gen() 6272 MB/s Feb 9 18:34:13.093010 kernel: raid6: int64x8 xor() 3533 MB/s Feb 9 18:34:13.110021 kernel: raid6: int64x4 gen() 7242 MB/s Feb 9 18:34:13.127014 kernel: raid6: int64x4 xor() 3832 MB/s Feb 9 18:34:13.144005 kernel: raid6: int64x2 gen() 6136 MB/s Feb 9 18:34:13.161021 kernel: raid6: int64x2 xor() 3306 MB/s Feb 9 18:34:13.178031 kernel: raid6: int64x1 gen() 5044 MB/s Feb 9 18:34:13.195189 kernel: raid6: int64x1 xor() 2633 MB/s Feb 9 18:34:13.195217 kernel: raid6: using algorithm neonx8 gen() 13760 MB/s Feb 9 18:34:13.195226 kernel: raid6: .... xor() 10797 MB/s, rmw enabled Feb 9 18:34:13.195234 kernel: raid6: using neon recovery algorithm Feb 9 18:34:13.207926 kernel: xor: measuring software checksum speed Feb 9 18:34:13.208994 kernel: 8regs : 17289 MB/sec Feb 9 18:34:13.210004 kernel: 32regs : 20749 MB/sec Feb 9 18:34:13.211004 kernel: arm64_neon : 27731 MB/sec Feb 9 18:34:13.211022 kernel: xor: using function: arm64_neon (27731 MB/sec) Feb 9 18:34:13.265015 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 18:34:13.277818 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:34:13.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:13.278000 audit: BPF prog-id=7 op=LOAD Feb 9 18:34:13.279000 audit: BPF prog-id=8 op=LOAD Feb 9 18:34:13.280992 kernel: audit: type=1130 audit(1707503653.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:13.281248 systemd[1]: Starting systemd-udevd.service... Feb 9 18:34:13.297884 systemd-udevd[491]: Using default interface naming scheme 'v252'. Feb 9 18:34:13.301326 systemd[1]: Started systemd-udevd.service. Feb 9 18:34:13.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:13.302878 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:34:13.315753 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Feb 9 18:34:13.342766 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:34:13.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:13.344397 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 9 18:34:13.378396 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:34:13.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:13.407361 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:34:13.411275 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:34:13.411310 kernel: GPT:9289727 != 19775487 Feb 9 18:34:13.411320 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:34:13.412121 kernel: GPT:9289727 != 19775487 Feb 9 18:34:13.412137 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:34:13.413005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:34:13.426008 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) Feb 9 18:34:13.426631 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:34:13.430149 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:34:13.435382 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:34:13.436199 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:34:13.442058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:34:13.443511 systemd[1]: Starting disk-uuid.service... Feb 9 18:34:13.449121 disk-uuid[562]: Primary Header is updated. Feb 9 18:34:13.449121 disk-uuid[562]: Secondary Entries is updated. Feb 9 18:34:13.449121 disk-uuid[562]: Secondary Header is updated. Feb 9 18:34:13.451990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:34:14.463856 disk-uuid[563]: The operation has completed successfully. Feb 9 18:34:14.465220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:34:14.488276 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:34:14.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.488366 systemd[1]: Finished disk-uuid.service. Feb 9 18:34:14.489813 systemd[1]: Starting verity-setup.service... Feb 9 18:34:14.504007 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:34:14.523645 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:34:14.525622 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:34:14.528161 systemd[1]: Finished verity-setup.service. Feb 9 18:34:14.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.576007 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:34:14.576424 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:34:14.577211 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:34:14.577793 systemd[1]: Starting ignition-setup.service... Feb 9 18:34:14.579810 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 18:34:14.585377 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:34:14.585409 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:34:14.585419 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:34:14.592635 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:34:14.598680 systemd[1]: Finished ignition-setup.service. Feb 9 18:34:14.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.600118 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:34:14.656683 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:34:14.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.658000 audit: BPF prog-id=9 op=LOAD Feb 9 18:34:14.659780 systemd[1]: Starting systemd-networkd.service... Feb 9 18:34:14.687509 systemd-networkd[739]: lo: Link UP Feb 9 18:34:14.687523 systemd-networkd[739]: lo: Gained carrier Feb 9 18:34:14.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.688285 ignition[647]: Ignition 2.14.0 Feb 9 18:34:14.687878 systemd-networkd[739]: Enumeration completed Feb 9 18:34:14.688292 ignition[647]: Stage: fetch-offline Feb 9 18:34:14.687953 systemd[1]: Started systemd-networkd.service. Feb 9 18:34:14.688332 ignition[647]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:14.689010 systemd[1]: Reached target network.target. Feb 9 18:34:14.688342 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:14.689584 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:34:14.688467 ignition[647]: parsed url from cmdline: "" Feb 9 18:34:14.690537 systemd-networkd[739]: eth0: Link UP Feb 9 18:34:14.688471 ignition[647]: no config URL provided Feb 9 18:34:14.690541 systemd-networkd[739]: eth0: Gained carrier Feb 9 18:34:14.688476 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:34:14.691407 systemd[1]: Starting iscsiuio.service... Feb 9 18:34:14.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.688483 ignition[647]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:34:14.702886 systemd[1]: Started iscsiuio.service. Feb 9 18:34:14.688499 ignition[647]: op(1): [started] loading QEMU firmware config module Feb 9 18:34:14.704663 systemd[1]: Starting iscsid.service... Feb 9 18:34:14.708290 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:34:14.708290 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 18:34:14.708290 iscsid[746]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 18:34:14.708290 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:34:14.708290 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:34:14.708290 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:34:14.708290 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:34:14.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.688504 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:34:14.706165 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:34:14.702878 ignition[647]: op(1): [finished] loading QEMU firmware config module Feb 9 18:34:14.710693 systemd[1]: Started iscsid.service. Feb 9 18:34:14.713475 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:34:14.723168 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:34:14.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.724166 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:34:14.725269 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:34:14.726485 systemd[1]: Reached target remote-fs.target. Feb 9 18:34:14.728300 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:34:14.735622 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:34:14.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.763582 ignition[647]: parsing config with SHA512: b636f2e351bf1d8074bf1621e886279ff274eb2447655a08d5b8ce90176b6f2031d1091649e1704f0c5cac04240688f1902beddc51c27570fcfcbeefdbf986ed Feb 9 18:34:14.800099 unknown[647]: fetched base config from "system" Feb 9 18:34:14.800109 unknown[647]: fetched user config from "qemu" Feb 9 18:34:14.802056 ignition[647]: fetch-offline: fetch-offline passed Feb 9 18:34:14.802783 ignition[647]: Ignition finished successfully Feb 9 18:34:14.804482 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:34:14.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.805229 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:34:14.805957 systemd[1]: Starting ignition-kargs.service... Feb 9 18:34:14.814407 ignition[761]: Ignition 2.14.0 Feb 9 18:34:14.814416 ignition[761]: Stage: kargs Feb 9 18:34:14.814503 ignition[761]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:14.814513 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:14.815602 ignition[761]: kargs: kargs passed Feb 9 18:34:14.815644 ignition[761]: Ignition finished successfully Feb 9 18:34:14.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:14.816947 systemd[1]: Finished ignition-kargs.service. Feb 9 18:34:14.818449 systemd[1]: Starting ignition-disks.service... Feb 9 18:34:14.824834 ignition[767]: Ignition 2.14.0 Feb 9 18:34:14.824844 ignition[767]: Stage: disks Feb 9 18:34:14.824935 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:14.824945 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:14.826266 ignition[767]: disks: disks passed Feb 9 18:34:14.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.827231 systemd[1]: Finished ignition-disks.service. Feb 9 18:34:14.826316 ignition[767]: Ignition finished successfully Feb 9 18:34:14.828038 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:34:14.828818 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:34:14.829800 systemd[1]: Reached target local-fs.target. Feb 9 18:34:14.830722 systemd[1]: Reached target sysinit.target. Feb 9 18:34:14.831681 systemd[1]: Reached target basic.target. Feb 9 18:34:14.833415 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:34:14.844486 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 18:34:14.847725 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:34:14.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.849333 systemd[1]: Mounting sysroot.mount... Feb 9 18:34:14.856009 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:34:14.856510 systemd[1]: Mounted sysroot.mount. Feb 9 18:34:14.857268 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:34:14.859421 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:34:14.860203 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:34:14.860252 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:34:14.860274 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:34:14.862807 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:34:14.864451 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:34:14.868822 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:34:14.873356 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:34:14.876867 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:34:14.880669 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:34:14.905844 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:34:14.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.907399 systemd[1]: Starting ignition-mount.service... Feb 9 18:34:14.908528 systemd[1]: Starting sysroot-boot.service... Feb 9 18:34:14.912748 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 9 18:34:14.921849 ignition[828]: INFO : Ignition 2.14.0 Feb 9 18:34:14.922688 ignition[828]: INFO : Stage: mount Feb 9 18:34:14.923370 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:14.924098 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:14.926184 ignition[828]: INFO : mount: mount passed Feb 9 18:34:14.926728 ignition[828]: INFO : Ignition finished successfully Feb 9 18:34:14.926958 systemd[1]: Finished ignition-mount.service. Feb 9 18:34:14.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:14.928042 systemd[1]: Finished sysroot-boot.service. Feb 9 18:34:14.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:15.534307 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:34:15.539003 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 9 18:34:15.540416 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:34:15.540440 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:34:15.540449 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:34:15.543480 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:34:15.544893 systemd[1]: Starting ignition-files.service... Feb 9 18:34:15.559269 ignition[856]: INFO : Ignition 2.14.0 Feb 9 18:34:15.559269 ignition[856]: INFO : Stage: files Feb 9 18:34:15.560385 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:15.560385 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:15.561863 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:34:15.565118 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:34:15.565118 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:34:15.569883 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:34:15.570865 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:34:15.572010 unknown[856]: wrote ssh authorized keys file for user: core Feb 9 18:34:15.572811 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:34:15.572811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:34:15.572811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:34:15.880428 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:34:15.918174 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:34:15.919913 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:34:15.919913 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:34:15.983142 systemd-networkd[739]: eth0: Gained IPv6LL Feb 9 18:34:16.298877 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:34:16.503077 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:34:16.505175 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:34:16.505175 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:34:16.505175 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:34:16.771242 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:34:16.890162 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:34:16.892274 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:34:16.892274 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:34:16.892274 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:34:16.892274 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:34:16.897127 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:34:17.014197 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:34:18.524102 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:34:18.526486 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:34:18.526486 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:34:18.526486 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:34:18.573736 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:34:20.291485 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:34:20.293814 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/opt/bin/kubectl" Feb 9 18:34:20.293814 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:34:20.293814 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:34:20.338813 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 18:34:24.194344 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:34:24.194344 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:34:24.198169 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:34:24.198169 ignition[856]: INFO : files: op(10): [started] processing unit "containerd.service" Feb 9 18:34:24.198169 ignition[856]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:34:24.198169 ignition[856]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:34:24.198169 ignition[856]: INFO : files: op(10): [finished] processing unit "containerd.service" Feb 9 18:34:24.198169 ignition[856]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(18): [started] processing unit "coreos-metadata.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(18): op(19): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(18): op(19): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(18): [finished] processing unit "coreos-metadata.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:34:24.223882 ignition[856]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:34:24.247376 ignition[856]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:34:24.247376 ignition[856]: INFO : files: 
createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:34:24.247376 ignition[856]: INFO : files: files passed Feb 9 18:34:24.247376 ignition[856]: INFO : Ignition finished successfully Feb 9 18:34:24.270719 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:34:24.270740 kernel: audit: type=1130 audit(1707503664.247:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.270752 kernel: audit: type=1130 audit(1707503664.258:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.270761 kernel: audit: type=1131 audit(1707503664.258:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.270770 kernel: audit: type=1130 audit(1707503664.263:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.246122 systemd[1]: Finished ignition-files.service. Feb 9 18:34:24.248789 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:34:24.251966 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:34:24.273864 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:34:24.252598 systemd[1]: Starting ignition-quench.service... Feb 9 18:34:24.275695 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:34:24.257022 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:34:24.257101 systemd[1]: Finished ignition-quench.service. Feb 9 18:34:24.258775 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:34:24.264111 systemd[1]: Reached target ignition-complete.target. Feb 9 18:34:24.268421 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:34:24.279722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:34:24.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:24.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.279800 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:34:24.286310 kernel: audit: type=1130 audit(1707503664.280:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.286329 kernel: audit: type=1131 audit(1707503664.280:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.280953 systemd[1]: Reached target initrd-fs.target. Feb 9 18:34:24.285820 systemd[1]: Reached target initrd.target. Feb 9 18:34:24.286960 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:34:24.287621 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:34:24.297122 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:34:24.300009 kernel: audit: type=1130 audit(1707503664.296:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.298517 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:34:24.305702 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:34:24.306438 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:34:24.307540 systemd[1]: Stopped target timers.target. Feb 9 18:34:24.308590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:34:24.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.308684 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:34:24.312912 kernel: audit: type=1131 audit(1707503664.308:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.309682 systemd[1]: Stopped target initrd.target. Feb 9 18:34:24.312492 systemd[1]: Stopped target basic.target. Feb 9 18:34:24.313486 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:34:24.314525 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:34:24.315579 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:34:24.316695 systemd[1]: Stopped target remote-fs.target. Feb 9 18:34:24.317742 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:34:24.318922 systemd[1]: Stopped target sysinit.target. Feb 9 18:34:24.319898 systemd[1]: Stopped target local-fs.target. Feb 9 18:34:24.320909 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:34:24.321901 systemd[1]: Stopped target swap.target. Feb 9 18:34:24.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:24.322831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:34:24.327284 kernel: audit: type=1131 audit(1707503664.323:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.322925 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:34:24.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.324036 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:34:24.331116 kernel: audit: type=1131 audit(1707503664.327:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.326646 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:34:24.326736 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:34:24.327906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:34:24.328018 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:34:24.330759 systemd[1]: Stopped target paths.target. Feb 9 18:34:24.331628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:34:24.336013 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:34:24.336735 systemd[1]: Stopped target slices.target. Feb 9 18:34:24.337761 systemd[1]: Stopped target sockets.target. Feb 9 18:34:24.338700 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:34:24.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.338798 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:34:24.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.339828 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:34:24.339913 systemd[1]: Stopped ignition-files.service. Feb 9 18:34:24.343375 iscsid[746]: iscsid shutting down. Feb 9 18:34:24.341753 systemd[1]: Stopping ignition-mount.service... Feb 9 18:34:24.344395 systemd[1]: Stopping iscsid.service... Feb 9 18:34:24.344948 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:34:24.345077 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:34:24.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:24.348096 ignition[896]: INFO : Ignition 2.14.0 Feb 9 18:34:24.348096 ignition[896]: INFO : Stage: umount Feb 9 18:34:24.348096 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:24.348096 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:24.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.346737 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:34:24.353001 ignition[896]: INFO : umount: umount passed Feb 9 18:34:24.353001 ignition[896]: INFO : Ignition finished successfully Feb 9 18:34:24.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.348628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:34:24.348768 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:34:24.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.349839 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:34:24.349932 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:34:24.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.352623 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:34:24.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.352722 systemd[1]: Stopped iscsid.service. Feb 9 18:34:24.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.354015 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:34:24.354082 systemd[1]: Closed iscsid.socket. Feb 9 18:34:24.354670 systemd[1]: Stopping iscsiuio.service... Feb 9 18:34:24.356526 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:34:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.356604 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:34:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.358565 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
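The Ignition op(4) through op(9) entries earlier in this capture fetch the CNI plugin archive, crictl, kubeadm, kubectl and kubelet, and accept each file only once it "matches expected sum of" a SHA-512 digest declared in the config. A minimal Python sketch of that kind of verification follows; the path and digest are placeholders, not values taken from the actual Ignition config.

    import hashlib

    def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream a file through SHA-512 and return its hex digest."""
        digest = hashlib.sha512()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical values -- substitute the real artifact path and the digest
    # published alongside the release, as the Ignition config does.
    path = "/sysroot/opt/bin/kubeadm"
    expected = "0" * 128  # placeholder 128-hex-character SHA-512 digest

    if sha512_of(path) != expected:
        raise SystemExit(f"{path}: checksum mismatch")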
Feb 9 18:34:24.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.358913 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:34:24.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.359006 systemd[1]: Stopped iscsiuio.service. Feb 9 18:34:24.359914 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:34:24.360057 systemd[1]: Stopped ignition-mount.service. Feb 9 18:34:24.361088 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:34:24.361149 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:34:24.362720 systemd[1]: Stopped target network.target. Feb 9 18:34:24.363636 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:34:24.363669 systemd[1]: Closed iscsiuio.socket. Feb 9 18:34:24.364467 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:34:24.364501 systemd[1]: Stopped ignition-disks.service. Feb 9 18:34:24.365452 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:34:24.365489 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:34:24.366633 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:34:24.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.366667 systemd[1]: Stopped ignition-setup.service. Feb 9 18:34:24.367630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:34:24.367660 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:34:24.369059 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:34:24.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.369881 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:34:24.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.376055 systemd-networkd[739]: eth0: DHCPv6 lease lost Feb 9 18:34:24.383000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:34:24.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.377047 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:34:24.377127 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:34:24.378180 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:34:24.378208 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:34:24.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.379578 systemd[1]: Stopping network-cleanup.service... Feb 9 18:34:24.380697 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:34:24.380749 systemd[1]: Stopped parse-ip-for-networkd.service. 
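The op(1a) through op(1e) preset entries earlier ("setting preset to enabled/disabled", "removing enablement symlink(s)") boil down to creating or deleting .wants symlinks under /etc/systemd/system. A rough sketch of that mechanism, assuming the units are wanted by multi-user.target (the log does not show the actual [Install] targets):

    from pathlib import Path

    # Assumed paths: the log does not name the [Install] target of these units,
    # so multi-user.target is an illustrative choice.
    unit_file = Path("/etc/systemd/system/prepare-helm.service")
    wants_dir = Path("/etc/systemd/system/multi-user.target.wants")

    def enable(unit_file: Path, wants_dir: Path) -> None:
        """Roughly what "setting preset to enabled" amounts to: a .wants symlink."""
        wants_dir.mkdir(parents=True, exist_ok=True)
        link = wants_dir / unit_file.name
        if not link.exists():
            link.symlink_to(unit_file)

    def disable(unit_file: Path, wants_dir: Path) -> None:
        """Roughly what "removing enablement symlink(s)" does for a disabled preset."""
        link = wants_dir / unit_file.name
        if link.is_symlink():
            link.unlink()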
Feb 9 18:34:24.381855 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:34:24.381889 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:34:24.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.383514 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:34:24.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.383553 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:34:24.394000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:34:24.384466 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:34:24.386242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:34:24.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.386610 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:34:24.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.386691 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:34:24.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.390745 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:34:24.390856 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:34:24.392092 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:34:24.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.392165 systemd[1]: Stopped network-cleanup.service. Feb 9 18:34:24.393082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:34:24.393114 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:34:24.394330 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:34:24.394361 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:34:24.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.395521 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:34:24.395561 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:34:24.396773 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:34:24.396810 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:34:24.398170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:34:24.398204 systemd[1]: Stopped dracut-cmdline-ask.service. 
Feb 9 18:34:24.400106 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:34:24.401269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:34:24.401321 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:34:24.405019 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:34:24.405092 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:34:24.406626 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:34:24.415000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:34:24.415000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:34:24.408259 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:34:24.413752 systemd[1]: Switching root. Feb 9 18:34:24.417000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:34:24.417000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:34:24.417000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:34:24.432079 systemd-journald[289]: Journal stopped Feb 9 18:34:26.684633 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 18:34:26.684694 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:34:26.684706 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:34:26.684720 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:34:26.684731 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:34:26.684745 kernel: SELinux: policy capability open_perms=1 Feb 9 18:34:26.684755 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:34:26.684766 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:34:26.684778 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:34:26.684788 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:34:26.684797 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:34:26.684808 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:34:26.684818 systemd[1]: Successfully loaded SELinux policy in 34.970ms. Feb 9 18:34:26.684837 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.407ms. Feb 9 18:34:26.684849 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:34:26.684860 systemd[1]: Detected virtualization kvm. Feb 9 18:34:26.684871 systemd[1]: Detected architecture arm64. Feb 9 18:34:26.684883 systemd[1]: Detected first boot. Feb 9 18:34:26.684894 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:34:26.684904 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:34:26.684914 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:34:26.684925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:26.684946 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:26.684958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
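From the switch-root onward the capture keeps the same entry shape: a timestamp with microsecond precision, a source such as kernel, systemd[1] or systemd-journald[1025], and the message text. A small standard-library sketch for splitting entries into those fields (the pattern is fitted to the lines shown here, nothing broader):

    import re
    from typing import Optional

    # Matches entries such as:
    #   Feb 9 18:34:26.684633 systemd[1]: Detected virtualization kvm.
    LINE = re.compile(
        r"^(?P<month>\w{3}) +(?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<source>[\w.@\\-]+(\[\d+\])?): (?P<message>.*)$"
    )

    def parse(entry: str) -> Optional[dict]:
        """Return timestamp, source and message fields for one entry, or None."""
        match = LINE.match(entry)
        return match.groupdict() if match else None

    print(parse("Feb 9 18:34:26.684633 systemd[1]: Detected virtualization kvm."))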
Feb 9 18:34:26.684972 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:34:26.685003 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:34:26.685015 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:34:26.685027 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:34:26.685038 systemd[1]: Created slice system-getty.slice. Feb 9 18:34:26.685048 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:34:26.685059 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:34:26.685071 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:34:26.685082 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:34:26.685093 systemd[1]: Created slice user.slice. Feb 9 18:34:26.685103 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:34:26.685114 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:34:26.685124 systemd[1]: Set up automount boot.automount. Feb 9 18:34:26.685134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:34:26.685145 systemd[1]: Reached target integritysetup.target. Feb 9 18:34:26.685157 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:34:26.685170 systemd[1]: Reached target remote-fs.target. Feb 9 18:34:26.685180 systemd[1]: Reached target slices.target. Feb 9 18:34:26.685191 systemd[1]: Reached target swap.target. Feb 9 18:34:26.685202 systemd[1]: Reached target torcx.target. Feb 9 18:34:26.685212 systemd[1]: Reached target veritysetup.target. Feb 9 18:34:26.685223 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:34:26.685234 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:34:26.685245 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:34:26.685256 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:34:26.685267 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:34:26.685278 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:34:26.685288 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:34:26.685299 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:34:26.685310 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:34:26.685321 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:34:26.685331 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:34:26.685345 systemd[1]: Mounting media.mount... Feb 9 18:34:26.685356 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:34:26.685367 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:34:26.685377 systemd[1]: Mounting tmp.mount... Feb 9 18:34:26.685389 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:34:26.685400 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:34:26.685410 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:34:26.685421 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:34:26.685432 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:34:26.685442 systemd[1]: Starting modprobe@drm.service... Feb 9 18:34:26.685453 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:34:26.685465 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:34:26.685476 systemd[1]: Starting modprobe@loop.service... Feb 9 18:34:26.685487 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 18:34:26.685499 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:34:26.685510 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:34:26.685521 systemd[1]: Starting systemd-journald.service... Feb 9 18:34:26.685532 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:34:26.685543 kernel: fuse: init (API version 7.34) Feb 9 18:34:26.685553 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:34:26.685564 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:34:26.685575 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:34:26.685586 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:34:26.685597 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:34:26.685608 systemd[1]: Mounted media.mount. Feb 9 18:34:26.685620 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:34:26.685631 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:34:26.685641 systemd[1]: Mounted tmp.mount. Feb 9 18:34:26.685651 kernel: loop: module loaded Feb 9 18:34:26.685663 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:34:26.685676 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:34:26.685686 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:34:26.685699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:34:26.685709 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:34:26.685720 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:34:26.685730 systemd[1]: Finished modprobe@drm.service. Feb 9 18:34:26.685741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:34:26.685752 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:34:26.685762 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:34:26.685772 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:34:26.685784 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:34:26.685795 systemd[1]: Finished modprobe@loop.service. Feb 9 18:34:26.685805 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:34:26.685816 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:34:26.685826 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:34:26.685837 systemd[1]: Reached target network-pre.target. Feb 9 18:34:26.685848 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:34:26.685860 systemd-journald[1025]: Journal started Feb 9 18:34:26.685901 systemd-journald[1025]: Runtime Journal (/run/log/journal/698c2c81fa2c446498699d9499f2ba3d) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:34:26.573000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:34:26.573000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:34:26.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:34:26.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:26.680000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:34:26.680000 audit[1025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffb5ecef0 a2=4000 a3=1 items=0 ppid=1 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:26.680000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:34:26.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.692207 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:34:26.692377 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:34:26.694516 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:34:26.698152 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:34:26.698202 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:34:26.702004 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:34:26.704118 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:34:26.708004 systemd[1]: Started systemd-journald.service. Feb 9 18:34:26.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.707910 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:34:26.708906 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:34:26.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.710123 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:34:26.711175 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:34:26.713017 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:34:26.722435 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:34:26.723725 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:34:26.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.725164 systemd-journald[1025]: Time spent on flushing to /var/log/journal/698c2c81fa2c446498699d9499f2ba3d is 12.912ms for 970 entries. Feb 9 18:34:26.725164 systemd-journald[1025]: System Journal (/var/log/journal/698c2c81fa2c446498699d9499f2ba3d) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:34:26.753464 systemd-journald[1025]: Received client request to flush runtime journal. Feb 9 18:34:26.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:26.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.725817 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:34:26.729163 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:34:26.731226 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:34:26.754733 udevadm[1079]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:34:26.755656 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:34:26.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.758508 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:34:26.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:26.760623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:34:26.775822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:34:26.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.065826 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:34:27.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.067911 systemd[1]: Starting systemd-udevd.service... Feb 9 18:34:27.084075 systemd-udevd[1087]: Using default interface naming scheme 'v252'. Feb 9 18:34:27.095351 systemd[1]: Started systemd-udevd.service. Feb 9 18:34:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.097484 systemd[1]: Starting systemd-networkd.service... Feb 9 18:34:27.107128 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:34:27.125676 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 18:34:27.149038 systemd[1]: Started systemd-userdbd.service. Feb 9 18:34:27.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.157777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:34:27.206959 systemd-networkd[1094]: lo: Link UP Feb 9 18:34:27.207351 systemd-networkd[1094]: lo: Gained carrier Feb 9 18:34:27.207383 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:34:27.207803 systemd-networkd[1094]: Enumeration completed Feb 9 18:34:27.207998 systemd-networkd[1094]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
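systemd-networkd's enumeration above brings lo up immediately and starts configuring eth0 from /usr/lib/systemd/network/zz-default.network. To check what the kernel reports for those links independently of networkd, the JSON output of iproute2 is convenient (this assumes an ip binary with -j support on PATH):

    import json
    import subprocess

    def link_states() -> dict:
        """Map each interface name to the operstate reported by 'ip -j link show'."""
        out = subprocess.run(
            ["ip", "-j", "link", "show"],
            capture_output=True, text=True, check=True,
        ).stdout
        return {link["ifname"]: link.get("operstate", "UNKNOWN") for link in json.loads(out)}

    # On a machine like the one logged here this prints something close to
    # {'lo': 'UNKNOWN', 'eth0': 'UP'} once eth0 has gained carrier.
    print(link_states())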
Feb 9 18:34:27.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.208384 systemd[1]: Started systemd-networkd.service. Feb 9 18:34:27.210174 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:34:27.211444 systemd-networkd[1094]: eth0: Link UP Feb 9 18:34:27.211609 systemd-networkd[1094]: eth0: Gained carrier Feb 9 18:34:27.220471 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:34:27.232099 systemd-networkd[1094]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:34:27.256828 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:34:27.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.257676 systemd[1]: Reached target cryptsetup.target. Feb 9 18:34:27.259443 systemd[1]: Starting lvm2-activation.service... Feb 9 18:34:27.263015 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:34:27.296875 systemd[1]: Finished lvm2-activation.service. Feb 9 18:34:27.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.297619 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:34:27.298275 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:34:27.298300 systemd[1]: Reached target local-fs.target. Feb 9 18:34:27.298830 systemd[1]: Reached target machines.target. Feb 9 18:34:27.300588 systemd[1]: Starting ldconfig.service... Feb 9 18:34:27.301422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.301473 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:34:27.302780 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:34:27.304556 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:34:27.306530 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:34:27.307306 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.307359 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.308964 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:34:27.310016 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1126 (bootctl) Feb 9 18:34:27.311313 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
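The DHCPv4 lease above puts eth0 at 10.0.0.88/16 with gateway 10.0.0.1, so the interface sits in the 10.0.0.0/16 network with netmask 255.255.0.0 and the gateway is on-link. The standard ipaddress module reproduces that arithmetic directly:

    import ipaddress

    # Address and gateway taken from the systemd-networkd DHCPv4 entry above.
    iface = ipaddress.ip_interface("10.0.0.88/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.netmask)                    # 255.255.0.0
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True -- the gateway is on-link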
Feb 9 18:34:27.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.315412 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:34:27.323257 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:34:27.324060 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:34:27.325113 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:34:27.349462 systemd-fsck[1135]: fsck.fat 4.2 (2021-01-31) Feb 9 18:34:27.349462 systemd-fsck[1135]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:34:27.351373 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:34:27.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.356213 systemd[1]: Mounting boot.mount... Feb 9 18:34:27.501293 systemd[1]: Mounted boot.mount. Feb 9 18:34:27.507730 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:34:27.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.511427 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:34:27.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.579668 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:34:27.583874 systemd[1]: Finished ldconfig.service. Feb 9 18:34:27.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.587922 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:34:27.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.589971 systemd[1]: Starting audit-rules.service... Feb 9 18:34:27.591878 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:34:27.593649 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:34:27.595886 systemd[1]: Starting systemd-resolved.service... Feb 9 18:34:27.598744 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:34:27.600702 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:34:27.602492 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:34:27.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:27.603677 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:34:27.609000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.609529 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:34:27.613696 systemd[1]: Starting systemd-update-done.service... Feb 9 18:34:27.616349 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:34:27.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.619420 systemd[1]: Finished systemd-update-done.service. Feb 9 18:34:27.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.629000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:34:27.629000 audit[1170]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7d3e430 a2=420 a3=0 items=0 ppid=1144 pid=1170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:27.629000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:34:27.630719 augenrules[1170]: No rules Feb 9 18:34:27.631286 systemd[1]: Finished audit-rules.service. Feb 9 18:34:27.652400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:34:27.662579 systemd-resolved[1152]: Positive Trust Anchors: Feb 9 18:34:27.662590 systemd-resolved[1152]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:34:27.662617 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:34:27.666772 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:34:27.667566 systemd-timesyncd[1155]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:34:27.667903 systemd-timesyncd[1155]: Initial clock synchronization to Fri 2024-02-09 18:34:28.022795 UTC. Feb 9 18:34:27.668012 systemd[1]: Reached target time-set.target. Feb 9 18:34:27.679473 systemd-resolved[1152]: Defaulting to hostname 'linux'. Feb 9 18:34:27.680803 systemd[1]: Started systemd-resolved.service. Feb 9 18:34:27.681613 systemd[1]: Reached target network.target. 
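The positive trust anchor systemd-resolved reports is the root zone's DS record for the 2017 root key-signing key, and the negative trust anchors are the usual private and special-use zones excluded from DNSSEC validation. The DS RDATA carries four fields (key tag, algorithm, digest type, digest), which a simple split makes explicit:

    # DS record copied from the systemd-resolved entry above.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    print("key tag    :", key_tag)       # 20326 -- the 2017 root KSK
    print("algorithm  :", algorithm)     # 8 = RSA/SHA-256
    print("digest type:", digest_type)   # 2 = SHA-256
    print("digest     :", digest)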
Feb 9 18:34:27.682219 systemd[1]: Reached target nss-lookup.target. Feb 9 18:34:27.682988 systemd[1]: Reached target sysinit.target. Feb 9 18:34:27.683772 systemd[1]: Started motdgen.path. Feb 9 18:34:27.684488 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:34:27.685594 systemd[1]: Started logrotate.timer. Feb 9 18:34:27.686338 systemd[1]: Started mdadm.timer. Feb 9 18:34:27.686944 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:34:27.687706 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:34:27.687735 systemd[1]: Reached target paths.target. Feb 9 18:34:27.688307 systemd[1]: Reached target timers.target. Feb 9 18:34:27.689301 systemd[1]: Listening on dbus.socket. Feb 9 18:34:27.691009 systemd[1]: Starting docker.socket... Feb 9 18:34:27.692515 systemd[1]: Listening on sshd.socket. Feb 9 18:34:27.693327 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:34:27.693619 systemd[1]: Listening on docker.socket. Feb 9 18:34:27.694341 systemd[1]: Reached target sockets.target. Feb 9 18:34:27.695042 systemd[1]: Reached target basic.target. Feb 9 18:34:27.695828 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:34:27.695876 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.695895 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.696874 systemd[1]: Starting containerd.service... Feb 9 18:34:27.698546 systemd[1]: Starting dbus.service... Feb 9 18:34:27.700055 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:34:27.701861 systemd[1]: Starting extend-filesystems.service... Feb 9 18:34:27.702656 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:34:27.703879 systemd[1]: Starting motdgen.service... Feb 9 18:34:27.705623 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:34:27.707463 systemd[1]: Starting prepare-critools.service... Feb 9 18:34:27.709335 systemd[1]: Starting prepare-helm.service... Feb 9 18:34:27.711513 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:34:27.713951 systemd[1]: Starting sshd-keygen.service... Feb 9 18:34:27.719998 systemd[1]: Starting systemd-logind.service... Feb 9 18:34:27.720700 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:34:27.720776 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:34:27.722076 systemd[1]: Starting update-engine.service... Feb 9 18:34:27.724365 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:34:27.731252 jq[1203]: true Feb 9 18:34:27.730084 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:34:27.730396 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:34:27.733001 jq[1182]: false Feb 9 18:34:27.743182 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
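The "System is tainted: cgroupsv1" line ties back to the /etc/flatcar-cgroupv1 marker and the 10-use-cgroupfs.conf containerd drop-in that Ignition wrote earlier: this node is deliberately running the legacy cgroup hierarchy. A quick heuristic for which hierarchy a host actually uses, based on the cgroup.controllers file that only a unified v2 root exposes:

    import os

    def cgroup_version(root: str = "/sys/fs/cgroup") -> int:
        """Return 2 on a unified (cgroup v2) root, otherwise treat it as v1/hybrid."""
        return 2 if os.path.exists(os.path.join(root, "cgroup.controllers")) else 1

    # On the machine this log came from, this is expected to print 1.
    print(cgroup_version())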
Feb 9 18:34:27.745188 tar[1205]: ./ Feb 9 18:34:27.749691 tar[1205]: ./macvlan Feb 9 18:34:27.751715 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:34:27.753172 tar[1206]: crictl Feb 9 18:34:27.756087 jq[1215]: true Feb 9 18:34:27.762849 tar[1207]: linux-arm64/helm Feb 9 18:34:27.763811 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:34:27.764084 systemd[1]: Finished motdgen.service. Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda1 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda2 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda3 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found usr Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda4 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda6 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda7 Feb 9 18:34:27.784248 extend-filesystems[1183]: Found vda9 Feb 9 18:34:27.784248 extend-filesystems[1183]: Checking size of /dev/vda9 Feb 9 18:34:27.780541 systemd[1]: Started dbus.service. Feb 9 18:34:27.780347 dbus-daemon[1181]: [system] SELinux support is enabled Feb 9 18:34:27.783241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:34:27.783272 systemd[1]: Reached target system-config.target. Feb 9 18:34:27.784093 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:34:27.784114 systemd[1]: Reached target user-config.target. Feb 9 18:34:27.812189 systemd-logind[1198]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:34:27.816743 systemd-logind[1198]: New seat seat0. Feb 9 18:34:27.821902 systemd[1]: Started systemd-logind.service. Feb 9 18:34:27.831963 extend-filesystems[1183]: Resized partition /dev/vda9 Feb 9 18:34:27.843852 extend-filesystems[1248]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:34:27.854527 tar[1205]: ./static Feb 9 18:34:27.863002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:34:27.885998 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:34:27.906907 extend-filesystems[1248]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:34:27.906907 extend-filesystems[1248]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:34:27.906907 extend-filesystems[1248]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:34:27.910060 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Feb 9 18:34:27.911145 bash[1244]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:34:27.906969 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:34:27.907216 systemd[1]: Finished extend-filesystems.service. Feb 9 18:34:27.910690 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:34:27.924158 update_engine[1199]: I0209 18:34:27.917994 1199 main.cc:92] Flatcar Update Engine starting Feb 9 18:34:27.927516 systemd[1]: Started update-engine.service. Feb 9 18:34:27.928266 update_engine[1199]: I0209 18:34:27.927564 1199 update_check_scheduler.cc:74] Next update check in 5m0s Feb 9 18:34:27.930033 systemd[1]: Started locksmithd.service. 
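extend-filesystems grows the root filesystem in place: resize2fs performs an on-line resize of /dev/vda9 from 553472 to 1864699 4k blocks while it stays mounted on /. A minimal manual equivalent is sketched below; the partition-growing step and the growpart tool are assumptions, since the log only records the filesystem resize itself:

    # Grow partition 9 of /dev/vda to use the remaining disk space (tool choice is an assumption)
    growpart /dev/vda 9
    # On-line resize of the mounted ext4 filesystem, matching the resize2fs output above
    resize2fs /dev/vda9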
Feb 9 18:34:27.934820 tar[1205]: ./vlan Feb 9 18:34:27.948557 env[1212]: time="2024-02-09T18:34:27.948508480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:34:27.962575 tar[1205]: ./portmap Feb 9 18:34:27.988116 tar[1205]: ./host-local Feb 9 18:34:28.021221 env[1212]: time="2024-02-09T18:34:28.021071012Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:34:28.021359 env[1212]: time="2024-02-09T18:34:28.021335457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.022825 tar[1205]: ./vrf Feb 9 18:34:28.027276 env[1212]: time="2024-02-09T18:34:28.027235420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027276 env[1212]: time="2024-02-09T18:34:28.027273228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027575 env[1212]: time="2024-02-09T18:34:28.027545569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027575 env[1212]: time="2024-02-09T18:34:28.027570718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027636 env[1212]: time="2024-02-09T18:34:28.027587679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:34:28.027636 env[1212]: time="2024-02-09T18:34:28.027599293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027699 env[1212]: time="2024-02-09T18:34:28.027677791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.027999 env[1212]: time="2024-02-09T18:34:28.027970101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.028186 env[1212]: time="2024-02-09T18:34:28.028161228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.028186 env[1212]: time="2024-02-09T18:34:28.028185459Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:34:28.028269 env[1212]: time="2024-02-09T18:34:28.028248040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:34:28.028269 env[1212]: time="2024-02-09T18:34:28.028266380Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:34:28.044096 env[1212]: time="2024-02-09T18:34:28.044049190Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 18:34:28.044096 env[1212]: time="2024-02-09T18:34:28.044098612Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:34:28.044189 env[1212]: time="2024-02-09T18:34:28.044114821Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:34:28.044189 env[1212]: time="2024-02-09T18:34:28.044149955Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044189 env[1212]: time="2024-02-09T18:34:28.044166081Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044189 env[1212]: time="2024-02-09T18:34:28.044181914Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044283 env[1212]: time="2024-02-09T18:34:28.044195241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044586 env[1212]: time="2024-02-09T18:34:28.044555228Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044626 env[1212]: time="2024-02-09T18:34:28.044586561Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044626 env[1212]: time="2024-02-09T18:34:28.044602812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044626 env[1212]: time="2024-02-09T18:34:28.044615178Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.044692 env[1212]: time="2024-02-09T18:34:28.044627961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:34:28.044782 env[1212]: time="2024-02-09T18:34:28.044758387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:34:28.044879 env[1212]: time="2024-02-09T18:34:28.044858567Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:34:28.045213 env[1212]: time="2024-02-09T18:34:28.045188016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.045253 env[1212]: time="2024-02-09T18:34:28.045223234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045253 env[1212]: time="2024-02-09T18:34:28.045237563Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.045363 env[1212]: time="2024-02-09T18:34:28.045343508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045398 env[1212]: time="2024-02-09T18:34:28.045368616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045398 env[1212]: time="2024-02-09T18:34:28.045383029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045398 env[1212]: time="2024-02-09T18:34:28.045394559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 9 18:34:28.045455 env[1212]: time="2024-02-09T18:34:28.045407092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045455 env[1212]: time="2024-02-09T18:34:28.045419416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045455 env[1212]: time="2024-02-09T18:34:28.045431030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045455 env[1212]: time="2024-02-09T18:34:28.045443563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045540 env[1212]: time="2024-02-09T18:34:28.045457057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.045599 env[1212]: time="2024-02-09T18:34:28.045577874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045638 env[1212]: time="2024-02-09T18:34:28.045600559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045638 env[1212]: time="2024-02-09T18:34:28.045614095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.045638 env[1212]: time="2024-02-09T18:34:28.045626669Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:34:28.045702 env[1212]: time="2024-02-09T18:34:28.045642252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:34:28.045702 env[1212]: time="2024-02-09T18:34:28.045653991Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.045702 env[1212]: time="2024-02-09T18:34:28.045672999Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:34:28.045797 env[1212]: time="2024-02-09T18:34:28.045707716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:34:28.045971 env[1212]: time="2024-02-09T18:34:28.045915470Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:34:28.045971 env[1212]: time="2024-02-09T18:34:28.045976672Z" level=info msg="Connect containerd service" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.046013937Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.046794990Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047194164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047199971Z" level=info msg="Start subscribing containerd event" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047233350Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047255283Z" level=info msg="Start recovering state" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047286072Z" level=info msg="containerd successfully booted in 0.100811s" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047326553Z" level=info msg="Start event monitor" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047346648Z" level=info msg="Start snapshots syncer" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047358512Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:34:28.048645 env[1212]: time="2024-02-09T18:34:28.047366450Z" level=info msg="Start streaming server" Feb 9 18:34:28.047396 systemd[1]: Started containerd.service. Feb 9 18:34:28.050727 tar[1205]: ./bridge Feb 9 18:34:28.081606 tar[1205]: ./tuning Feb 9 18:34:28.106484 tar[1205]: ./firewall Feb 9 18:34:28.137813 tar[1205]: ./host-device Feb 9 18:34:28.165991 tar[1205]: ./sbr Feb 9 18:34:28.191189 tar[1205]: ./loopback Feb 9 18:34:28.215681 tar[1205]: ./dhcp Feb 9 18:34:28.251219 systemd[1]: Created slice system-sshd.slice. Feb 9 18:34:28.285257 tar[1207]: linux-arm64/LICENSE Feb 9 18:34:28.285459 tar[1207]: linux-arm64/README.md Feb 9 18:34:28.290081 systemd[1]: Finished prepare-helm.service. Feb 9 18:34:28.339544 tar[1205]: ./ptp Feb 9 18:34:28.362140 locksmithd[1253]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:34:28.374953 tar[1205]: ./ipvlan Feb 9 18:34:28.388525 systemd[1]: Finished prepare-critools.service. Feb 9 18:34:28.410093 tar[1205]: ./bandwidth Feb 9 18:34:28.451407 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:34:28.463212 systemd-networkd[1094]: eth0: Gained IPv6LL Feb 9 18:34:31.281011 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:34:31.299318 systemd[1]: Finished sshd-keygen.service. Feb 9 18:34:31.301593 systemd[1]: Starting issuegen.service... Feb 9 18:34:31.303215 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:47674.service. Feb 9 18:34:31.306340 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:34:31.306575 systemd[1]: Finished issuegen.service. Feb 9 18:34:31.308677 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:34:31.317715 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:34:31.319881 systemd[1]: Started getty@tty1.service. Feb 9 18:34:31.321842 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:34:31.322897 systemd[1]: Reached target getty.target. Feb 9 18:34:31.323781 systemd[1]: Reached target multi-user.target. Feb 9 18:34:31.325802 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:34:31.334091 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:34:31.334296 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:34:31.335289 systemd[1]: Startup finished in 12.496s (kernel) + 6.855s (userspace) = 19.351s. Feb 9 18:34:31.363043 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 47674 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:31.364852 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.374212 systemd[1]: Created slice user-500.slice. Feb 9 18:34:31.375042 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:34:31.379889 systemd-logind[1198]: New session 1 of user core. 
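containerd's CRI plugin starts with NetworkPluginBinDir /opt/cni/bin and NetworkPluginConfDir /etc/cni/net.d and, finding no network config there yet, logs "failed to load cni during init"; the prepare-cni-plugins job above has just unpacked the plugin binaries (bridge, host-local, portmap, and so on). Purely for illustration, a minimal conflist that the CNI conf syncer could pick up might be placed at /etc/cni/net.d/10-example.conflist; the network name and subnet below are placeholders, not values from this host:

    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }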
Feb 9 18:34:31.382933 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:34:31.383989 systemd[1]: Starting user@500.service... Feb 9 18:34:31.386945 (systemd)[1298]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.442813 systemd[1298]: Queued start job for default target default.target. Feb 9 18:34:31.442996 systemd[1298]: Reached target paths.target. Feb 9 18:34:31.443028 systemd[1298]: Reached target sockets.target. Feb 9 18:34:31.443039 systemd[1298]: Reached target timers.target. Feb 9 18:34:31.443062 systemd[1298]: Reached target basic.target. Feb 9 18:34:31.443100 systemd[1298]: Reached target default.target. Feb 9 18:34:31.443122 systemd[1298]: Startup finished in 51ms. Feb 9 18:34:31.443173 systemd[1]: Started user@500.service. Feb 9 18:34:31.444028 systemd[1]: Started session-1.scope. Feb 9 18:34:31.494057 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:47676.service. Feb 9 18:34:31.533387 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 47676 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:31.534805 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.538719 systemd-logind[1198]: New session 2 of user core. Feb 9 18:34:31.539246 systemd[1]: Started session-2.scope. Feb 9 18:34:31.597932 sshd[1307]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:31.600084 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:47686.service. Feb 9 18:34:31.600743 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:47676.service: Deactivated successfully. Feb 9 18:34:31.601681 systemd-logind[1198]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:34:31.601741 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:34:31.602479 systemd-logind[1198]: Removed session 2. Feb 9 18:34:31.640585 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 47686 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:31.641862 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.644658 systemd-logind[1198]: New session 3 of user core. Feb 9 18:34:31.645401 systemd[1]: Started session-3.scope. Feb 9 18:34:31.694944 sshd[1312]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:31.697117 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:47688.service. Feb 9 18:34:31.697561 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:47686.service: Deactivated successfully. Feb 9 18:34:31.698576 systemd-logind[1198]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:34:31.698603 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:34:31.699543 systemd-logind[1198]: Removed session 3. Feb 9 18:34:31.738154 sshd[1319]: Accepted publickey for core from 10.0.0.1 port 47688 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:31.739215 sshd[1319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.742181 systemd-logind[1198]: New session 4 of user core. Feb 9 18:34:31.742934 systemd[1]: Started session-4.scope. Feb 9 18:34:31.796215 sshd[1319]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:31.798413 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:47698.service. Feb 9 18:34:31.798853 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:47688.service: Deactivated successfully. Feb 9 18:34:31.799795 systemd-logind[1198]: Session 4 logged out. Waiting for processes to exit. 
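Each accepted SSH key results in a new logind session (sessions 1 through 4 above) running in a scope under user-500.slice, alongside the per-user manager started as user@500.service. A hedged way to inspect the same state on the host:

    # Sessions known to systemd-logind and their scopes
    loginctl list-sessions
    loginctl session-status 1
    # The core user's own systemd instance and its default.target
    systemctl --user status default.target   # run as the core user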
Feb 9 18:34:31.799862 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:34:31.800720 systemd-logind[1198]: Removed session 4. Feb 9 18:34:31.839356 sshd[1327]: Accepted publickey for core from 10.0.0.1 port 47698 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:31.840332 sshd[1327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:31.843068 systemd-logind[1198]: New session 5 of user core. Feb 9 18:34:31.843767 systemd[1]: Started session-5.scope. Feb 9 18:34:31.904493 sudo[1332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:34:31.904692 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:34:32.638316 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:34:32.643814 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:34:32.644249 systemd[1]: Reached target network-online.target. Feb 9 18:34:32.645543 systemd[1]: Starting docker.service... Feb 9 18:34:32.726506 env[1351]: time="2024-02-09T18:34:32.726453460Z" level=info msg="Starting up" Feb 9 18:34:32.727898 env[1351]: time="2024-02-09T18:34:32.727875419Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:34:32.727898 env[1351]: time="2024-02-09T18:34:32.727894790Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:34:32.728008 env[1351]: time="2024-02-09T18:34:32.727913875Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:34:32.728008 env[1351]: time="2024-02-09T18:34:32.727923642Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:34:32.730056 env[1351]: time="2024-02-09T18:34:32.730034521Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:34:32.730301 env[1351]: time="2024-02-09T18:34:32.730282698Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:34:32.730375 env[1351]: time="2024-02-09T18:34:32.730360717Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:34:32.730428 env[1351]: time="2024-02-09T18:34:32.730414481Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:34:32.958070 env[1351]: time="2024-02-09T18:34:32.957957072Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 18:34:32.958247 env[1351]: time="2024-02-09T18:34:32.958229832Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 18:34:32.958444 env[1351]: time="2024-02-09T18:34:32.958425928Z" level=info msg="Loading containers: start." Feb 9 18:34:33.052019 kernel: Initializing XFRM netlink socket Feb 9 18:34:33.074055 env[1351]: time="2024-02-09T18:34:33.074010739Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:34:33.123286 systemd-networkd[1094]: docker0: Link UP Feb 9 18:34:33.131183 env[1351]: time="2024-02-09T18:34:33.131153893Z" level=info msg="Loading containers: done." Feb 9 18:34:33.152251 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1618351940-merged.mount: Deactivated successfully. 
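dockerd creates the default docker0 bridge on 172.17.0.0/16 and points out that --bip can select a preferred address. A hedged example of making that persistent through the daemon configuration file, typically /etc/docker/daemon.json (the address is a placeholder, not taken from this host):

    {
      "bip": "172.18.0.1/16"
    }

The same effect can be had on the command line with dockerd --bip=172.18.0.1/16.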
Feb 9 18:34:33.154911 env[1351]: time="2024-02-09T18:34:33.154875534Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:34:33.155177 env[1351]: time="2024-02-09T18:34:33.155155448Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:34:33.155335 env[1351]: time="2024-02-09T18:34:33.155317906Z" level=info msg="Daemon has completed initialization" Feb 9 18:34:33.173061 systemd[1]: Started docker.service. Feb 9 18:34:33.174377 env[1351]: time="2024-02-09T18:34:33.174338568Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:34:33.189814 systemd[1]: Reloading. Feb 9 18:34:33.229824 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2024-02-09T18:34:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:33.229853 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2024-02-09T18:34:33Z" level=info msg="torcx already run" Feb 9 18:34:33.291240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:33.291256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:33.307608 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:33.368305 systemd[1]: Started kubelet.service. Feb 9 18:34:33.529749 kubelet[1536]: E0209 18:34:33.529621 1536 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:33.531690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:33.531853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:34:33.748335 env[1212]: time="2024-02-09T18:34:33.748290682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:34:34.472318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756589185.mount: Deactivated successfully. 
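The first kubelet start aborts during flag validation: no container runtime endpoint was given. Since containerd is serving CRI on /run/containerd/containerd.sock (see its startup above), the missing piece is a pointer to that socket. One hedged way to supply it is a systemd drop-in; the file name is illustrative, and the Environment variable only takes effect if the shipped unit actually expands it, which is an assumption here:

    # /etc/systemd/system/kubelet.service.d/10-cri.conf (illustrative)
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"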
Feb 9 18:34:36.357069 env[1212]: time="2024-02-09T18:34:36.357017502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:36.358565 env[1212]: time="2024-02-09T18:34:36.358532636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:36.360497 env[1212]: time="2024-02-09T18:34:36.360471052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:36.362147 env[1212]: time="2024-02-09T18:34:36.362121052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:36.363715 env[1212]: time="2024-02-09T18:34:36.363684025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:34:36.372485 env[1212]: time="2024-02-09T18:34:36.372455304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:34:38.590500 env[1212]: time="2024-02-09T18:34:38.590449269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:38.591945 env[1212]: time="2024-02-09T18:34:38.591915320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:38.593833 env[1212]: time="2024-02-09T18:34:38.593809030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:38.595595 env[1212]: time="2024-02-09T18:34:38.595571342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:38.596452 env[1212]: time="2024-02-09T18:34:38.596424758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:34:38.604675 env[1212]: time="2024-02-09T18:34:38.604651728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:34:40.001995 env[1212]: time="2024-02-09T18:34:40.001936266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:40.004929 env[1212]: time="2024-02-09T18:34:40.004897844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:40.006793 env[1212]: 
time="2024-02-09T18:34:40.006769397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:40.008719 env[1212]: time="2024-02-09T18:34:40.008695433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:40.009485 env[1212]: time="2024-02-09T18:34:40.009454321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:34:40.018628 env[1212]: time="2024-02-09T18:34:40.018596696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:34:41.236955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314114191.mount: Deactivated successfully. Feb 9 18:34:41.575343 env[1212]: time="2024-02-09T18:34:41.575219212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:41.579546 env[1212]: time="2024-02-09T18:34:41.579510701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:41.585067 env[1212]: time="2024-02-09T18:34:41.585034440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:41.586769 env[1212]: time="2024-02-09T18:34:41.586712674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:41.587253 env[1212]: time="2024-02-09T18:34:41.587218966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:34:41.596797 env[1212]: time="2024-02-09T18:34:41.596768168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:34:42.106527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184809158.mount: Deactivated successfully. 
Feb 9 18:34:42.111232 env[1212]: time="2024-02-09T18:34:42.111186051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.113234 env[1212]: time="2024-02-09T18:34:42.113197493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.115442 env[1212]: time="2024-02-09T18:34:42.115412842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.117475 env[1212]: time="2024-02-09T18:34:42.117447965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.118550 env[1212]: time="2024-02-09T18:34:42.118513614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:34:42.128559 env[1212]: time="2024-02-09T18:34:42.128533973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:34:43.003923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518837317.mount: Deactivated successfully. Feb 9 18:34:43.699975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:34:43.700250 systemd[1]: Stopped kubelet.service. Feb 9 18:34:43.701819 systemd[1]: Started kubelet.service. Feb 9 18:34:43.749248 kubelet[1591]: E0209 18:34:43.749195 1591 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:43.751902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:43.752057 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
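Roughly ten seconds after the first failure, systemd schedules a restart ("restart counter is at 1") and the kubelet fails again with the same validation error, since nothing has changed in between. That retry loop comes from the unit's restart policy; a hedged sketch of the directives involved (the exact values in the shipped unit are an assumption):

    [Service]
    Restart=always
    RestartSec=10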
Feb 9 18:34:44.758451 env[1212]: time="2024-02-09T18:34:44.758405926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.759596 env[1212]: time="2024-02-09T18:34:44.759572330Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.761277 env[1212]: time="2024-02-09T18:34:44.761247789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.762920 env[1212]: time="2024-02-09T18:34:44.762890960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.763516 env[1212]: time="2024-02-09T18:34:44.763491171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:34:44.771970 env[1212]: time="2024-02-09T18:34:44.771944746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:34:45.369954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548810859.mount: Deactivated successfully. Feb 9 18:34:46.150206 env[1212]: time="2024-02-09T18:34:46.150139887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.151869 env[1212]: time="2024-02-09T18:34:46.151833660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.153113 env[1212]: time="2024-02-09T18:34:46.153078156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.154538 env[1212]: time="2024-02-09T18:34:46.154499278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.155192 env[1212]: time="2024-02-09T18:34:46.155158193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:34:50.510475 systemd[1]: Stopped kubelet.service. Feb 9 18:34:50.523599 systemd[1]: Reloading. 
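Even though the kubelet keeps exiting, containerd has by now pulled the full set of control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns) through its CRI image service. With the crictl binary unpacked earlier by prepare-critools, those cached images can be listed straight against the containerd socket; a hedged example:

    # List the images pulled above via containerd's CRI endpoint
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # Pull a further image over the same path (the tag is only an example)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.9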
Feb 9 18:34:50.565191 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2024-02-09T18:34:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:50.565551 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2024-02-09T18:34:50Z" level=info msg="torcx already run" Feb 9 18:34:50.628576 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:50.628751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:50.645250 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:50.712597 systemd[1]: Started kubelet.service. Feb 9 18:34:50.750019 kubelet[1740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:50.750019 kubelet[1740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:50.750346 kubelet[1740]: I0209 18:34:50.750120 1740 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:50.751244 kubelet[1740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:50.751244 kubelet[1740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:52.098384 kubelet[1740]: I0209 18:34:52.098344 1740 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:52.098384 kubelet[1740]: I0209 18:34:52.098372 1740 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:52.098722 kubelet[1740]: I0209 18:34:52.098589 1740 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:52.102649 kubelet[1740]: I0209 18:34:52.102632 1740 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:52.103225 kubelet[1740]: E0209 18:34:52.103210 1740 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.104927 kubelet[1740]: W0209 18:34:52.104904 1740 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:52.105709 kubelet[1740]: I0209 18:34:52.105690 1740 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:52.106329 kubelet[1740]: I0209 18:34:52.106308 1740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:52.106388 kubelet[1740]: I0209 18:34:52.106377 1740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:52.106502 kubelet[1740]: I0209 18:34:52.106458 1740 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:52.106502 kubelet[1740]: I0209 18:34:52.106469 1740 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:52.106554 kubelet[1740]: I0209 18:34:52.106543 1740 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:52.110141 kubelet[1740]: I0209 18:34:52.110122 1740 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:52.110141 kubelet[1740]: I0209 18:34:52.110144 1740 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:52.110349 kubelet[1740]: I0209 18:34:52.110338 1740 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:52.110378 kubelet[1740]: I0209 18:34:52.110352 1740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:52.111410 kubelet[1740]: W0209 18:34:52.111360 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.111410 kubelet[1740]: E0209 18:34:52.111414 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.111710 kubelet[1740]: W0209 18:34:52.111652 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.111710 kubelet[1740]: E0209 18:34:52.111691 1740 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.111835 kubelet[1740]: I0209 18:34:52.111818 1740 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:52.112738 kubelet[1740]: W0209 18:34:52.112710 1740 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:34:52.113230 kubelet[1740]: I0209 18:34:52.113188 1740 server.go:1186] "Started kubelet" Feb 9 18:34:52.115426 kubelet[1740]: E0209 18:34:52.115314 1740 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2458fad12bb36", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 52, 113165110, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 52, 113165110, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.88:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.88:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:34:52.115556 kubelet[1740]: E0209 18:34:52.115532 1740 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:52.115603 kubelet[1740]: E0209 18:34:52.115559 1740 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:52.116083 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
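On this attempt the kubelet stays up, talking to containerd 1.6.16 over CRI, but every request to https://10.0.0.88:6443 is refused because the kube-apiserver it is about to start from /etc/kubernetes/manifests does not exist yet. The startup warnings also note that --pod-infra-container-image and --volume-plugin-dir are deprecated flags that belong in the kubelet config file. A hedged sketch of such a file, reusing the cgroup driver, flexvolume directory, static pod path and eviction thresholds echoed in the container-manager dump above (the file location is an assumption):

    # Illustrative KubeletConfiguration, values copied from the log above
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"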
Feb 9 18:34:52.116300 kubelet[1740]: I0209 18:34:52.116279 1740 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:52.116551 kubelet[1740]: I0209 18:34:52.116530 1740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:52.117904 kubelet[1740]: I0209 18:34:52.117882 1740 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:52.118357 kubelet[1740]: I0209 18:34:52.118343 1740 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:52.118647 kubelet[1740]: I0209 18:34:52.118577 1740 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:52.118945 kubelet[1740]: W0209 18:34:52.118894 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.118945 kubelet[1740]: E0209 18:34:52.118946 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.119252 kubelet[1740]: E0209 18:34:52.119189 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:34:52.119433 kubelet[1740]: E0209 18:34:52.119362 1740 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.147444 kubelet[1740]: I0209 18:34:52.147426 1740 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:52.147544 kubelet[1740]: I0209 18:34:52.147533 1740 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:52.147602 kubelet[1740]: I0209 18:34:52.147593 1740 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:52.154562 kubelet[1740]: I0209 18:34:52.154541 1740 policy_none.go:49] "None policy: Start" Feb 9 18:34:52.155201 kubelet[1740]: I0209 18:34:52.155167 1740 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:52.155201 kubelet[1740]: I0209 18:34:52.155194 1740 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:52.160101 kubelet[1740]: I0209 18:34:52.160077 1740 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:52.160280 kubelet[1740]: I0209 18:34:52.160258 1740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:52.162331 kubelet[1740]: E0209 18:34:52.162314 1740 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 18:34:52.163410 kubelet[1740]: I0209 18:34:52.163390 1740 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:52.181501 kubelet[1740]: I0209 18:34:52.181484 1740 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:34:52.181501 kubelet[1740]: I0209 18:34:52.181502 1740 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:52.181613 kubelet[1740]: I0209 18:34:52.181516 1740 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:52.181613 kubelet[1740]: E0209 18:34:52.181551 1740 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:34:52.182134 kubelet[1740]: W0209 18:34:52.182084 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.182199 kubelet[1740]: E0209 18:34:52.182152 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.220430 kubelet[1740]: I0209 18:34:52.220409 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:52.220908 kubelet[1740]: E0209 18:34:52.220848 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Feb 9 18:34:52.282030 kubelet[1740]: I0209 18:34:52.281997 1740 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:52.282899 kubelet[1740]: I0209 18:34:52.282878 1740 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:52.283595 kubelet[1740]: I0209 18:34:52.283574 1740 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:52.284614 kubelet[1740]: I0209 18:34:52.284590 1740 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.88:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.88:6443: connect: connection refused" Feb 9 18:34:52.285362 kubelet[1740]: I0209 18:34:52.285327 1740 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.88:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.88:6443: connect: connection refused" Feb 9 18:34:52.289385 kubelet[1740]: I0209 18:34:52.289355 1740 status_manager.go:698] "Failed to get status for pod" podUID=73322c7b2168cca9dcb3389b3357e440 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.88:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.88:6443: connect: connection refused" Feb 9 18:34:52.320166 kubelet[1740]: I0209 18:34:52.320131 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:52.320235 kubelet[1740]: I0209 18:34:52.320177 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:52.320235 kubelet[1740]: I0209 18:34:52.320200 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:52.320235 kubelet[1740]: I0209 18:34:52.320219 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:52.320336 kubelet[1740]: I0209 18:34:52.320239 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:52.320336 kubelet[1740]: I0209 18:34:52.320259 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:52.320336 kubelet[1740]: I0209 18:34:52.320297 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:34:52.320336 kubelet[1740]: I0209 18:34:52.320320 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:52.320419 kubelet[1740]: I0209 18:34:52.320342 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:52.320669 kubelet[1740]: E0209 18:34:52.320629 1740 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.422605 kubelet[1740]: I0209 18:34:52.421939 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:52.422605 kubelet[1740]: E0209 18:34:52.422252 1740 kubelet_node_status.go:92] "Unable to register node with 
API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Feb 9 18:34:52.433718 kubelet[1740]: E0209 18:34:52.433626 1740 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2458fad12bb36", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 52, 113165110, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 52, 113165110, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.88:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.88:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:34:52.589653 kubelet[1740]: E0209 18:34:52.589629 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:52.590398 env[1212]: time="2024-02-09T18:34:52.590347313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:52.592283 kubelet[1740]: E0209 18:34:52.592264 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:52.592584 kubelet[1740]: E0209 18:34:52.592571 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:52.592622 env[1212]: time="2024-02-09T18:34:52.592579928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:52.592937 env[1212]: time="2024-02-09T18:34:52.592827494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73322c7b2168cca9dcb3389b3357e440,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:52.722004 kubelet[1740]: E0209 18:34:52.721942 1740 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:52.823078 kubelet[1740]: I0209 18:34:52.823047 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:52.823350 kubelet[1740]: E0209 18:34:52.823336 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Feb 9 18:34:53.052222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147863033.mount: Deactivated successfully. Feb 9 18:34:53.055911 env[1212]: time="2024-02-09T18:34:53.055874280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.057338 env[1212]: time="2024-02-09T18:34:53.057310741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.059654 env[1212]: time="2024-02-09T18:34:53.059617011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.060497 env[1212]: time="2024-02-09T18:34:53.060470795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.062378 env[1212]: time="2024-02-09T18:34:53.062352356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.064645 env[1212]: time="2024-02-09T18:34:53.064585551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.066393 env[1212]: time="2024-02-09T18:34:53.066354416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.067156 env[1212]: time="2024-02-09T18:34:53.067122024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.070174 env[1212]: time="2024-02-09T18:34:53.070133684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.071094 env[1212]: time="2024-02-09T18:34:53.071061184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.071700 env[1212]: time="2024-02-09T18:34:53.071670263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.073762 env[1212]: time="2024-02-09T18:34:53.073726980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:53.114430 env[1212]: time="2024-02-09T18:34:53.114356492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:53.114430 env[1212]: time="2024-02-09T18:34:53.114395794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:53.114430 env[1212]: time="2024-02-09T18:34:53.114410617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:53.114670 env[1212]: time="2024-02-09T18:34:53.114612535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7623e3664f68044755320174130cfbbce44ef159572ada2949e7920a31fd2a35 pid=1834 runtime=io.containerd.runc.v2 Feb 9 18:34:53.115133 env[1212]: time="2024-02-09T18:34:53.115064046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:53.115133 env[1212]: time="2024-02-09T18:34:53.115099902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:53.115133 env[1212]: time="2024-02-09T18:34:53.115116608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:53.115273 env[1212]: time="2024-02-09T18:34:53.115234795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9791a8dc5271623b6d8fe2fdf3c0a4b9b8f1a85ff58c3973f2214dafdf8e7a0 pid=1835 runtime=io.containerd.runc.v2 Feb 9 18:34:53.116151 env[1212]: time="2024-02-09T18:34:53.116081287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:53.116151 env[1212]: time="2024-02-09T18:34:53.116116903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:53.116151 env[1212]: time="2024-02-09T18:34:53.116127319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:53.116924 env[1212]: time="2024-02-09T18:34:53.116860994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4863bcac35c4b500c4819d4924ca7bff4de43dd2936338faff30fb30359c217e pid=1836 runtime=io.containerd.runc.v2 Feb 9 18:34:53.162261 kubelet[1740]: W0209 18:34:53.162208 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.162261 kubelet[1740]: E0209 18:34:53.162269 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.194626 env[1212]: time="2024-02-09T18:34:53.194576000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73322c7b2168cca9dcb3389b3357e440,Namespace:kube-system,Attempt:0,} returns sandbox id \"4863bcac35c4b500c4819d4924ca7bff4de43dd2936338faff30fb30359c217e\"" Feb 9 18:34:53.195549 kubelet[1740]: E0209 18:34:53.195528 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:53.197791 env[1212]: time="2024-02-09T18:34:53.197749635Z" level=info msg="CreateContainer within sandbox \"4863bcac35c4b500c4819d4924ca7bff4de43dd2936338faff30fb30359c217e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:34:53.198220 env[1212]: time="2024-02-09T18:34:53.198176106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7623e3664f68044755320174130cfbbce44ef159572ada2949e7920a31fd2a35\"" Feb 9 18:34:53.199567 env[1212]: time="2024-02-09T18:34:53.199540414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9791a8dc5271623b6d8fe2fdf3c0a4b9b8f1a85ff58c3973f2214dafdf8e7a0\"" Feb 9 18:34:53.199782 kubelet[1740]: E0209 18:34:53.199768 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:53.200609 kubelet[1740]: E0209 18:34:53.200346 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:53.207510 env[1212]: time="2024-02-09T18:34:53.207457515Z" level=info msg="CreateContainer within sandbox \"7623e3664f68044755320174130cfbbce44ef159572ada2949e7920a31fd2a35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:34:53.208593 env[1212]: time="2024-02-09T18:34:53.208563496Z" level=info msg="CreateContainer within sandbox \"d9791a8dc5271623b6d8fe2fdf3c0a4b9b8f1a85ff58c3973f2214dafdf8e7a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:34:53.223502 env[1212]: time="2024-02-09T18:34:53.223464911Z" level=info msg="CreateContainer within sandbox 
\"7623e3664f68044755320174130cfbbce44ef159572ada2949e7920a31fd2a35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f7eadefa553c4aa51973646ac21beaa79f92c17847c1cc9f8f1fec010a81fa6c\"" Feb 9 18:34:53.224017 env[1212]: time="2024-02-09T18:34:53.223952158Z" level=info msg="StartContainer for \"f7eadefa553c4aa51973646ac21beaa79f92c17847c1cc9f8f1fec010a81fa6c\"" Feb 9 18:34:53.224649 env[1212]: time="2024-02-09T18:34:53.224615963Z" level=info msg="CreateContainer within sandbox \"4863bcac35c4b500c4819d4924ca7bff4de43dd2936338faff30fb30359c217e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b9162930bee93b93d901f43f20e2ceb3a8d0362af4378d6abd647a99aa35fb9\"" Feb 9 18:34:53.224916 env[1212]: time="2024-02-09T18:34:53.224894962Z" level=info msg="StartContainer for \"0b9162930bee93b93d901f43f20e2ceb3a8d0362af4378d6abd647a99aa35fb9\"" Feb 9 18:34:53.228191 env[1212]: time="2024-02-09T18:34:53.228154453Z" level=info msg="CreateContainer within sandbox \"d9791a8dc5271623b6d8fe2fdf3c0a4b9b8f1a85ff58c3973f2214dafdf8e7a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"431739fc8e8286dcb155a0c4abb5fadb1fd576a0663ebcd209a21bcbdf6b1d08\"" Feb 9 18:34:53.228715 env[1212]: time="2024-02-09T18:34:53.228689816Z" level=info msg="StartContainer for \"431739fc8e8286dcb155a0c4abb5fadb1fd576a0663ebcd209a21bcbdf6b1d08\"" Feb 9 18:34:53.255189 kubelet[1740]: W0209 18:34:53.254686 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.255189 kubelet[1740]: E0209 18:34:53.254753 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.301041 env[1212]: time="2024-02-09T18:34:53.300959289Z" level=info msg="StartContainer for \"0b9162930bee93b93d901f43f20e2ceb3a8d0362af4378d6abd647a99aa35fb9\" returns successfully" Feb 9 18:34:53.313477 env[1212]: time="2024-02-09T18:34:53.313392339Z" level=info msg="StartContainer for \"f7eadefa553c4aa51973646ac21beaa79f92c17847c1cc9f8f1fec010a81fa6c\" returns successfully" Feb 9 18:34:53.336455 env[1212]: time="2024-02-09T18:34:53.336412333Z" level=info msg="StartContainer for \"431739fc8e8286dcb155a0c4abb5fadb1fd576a0663ebcd209a21bcbdf6b1d08\" returns successfully" Feb 9 18:34:53.449607 kubelet[1740]: W0209 18:34:53.449548 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.449749 kubelet[1740]: E0209 18:34:53.449622 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Feb 9 18:34:53.624596 kubelet[1740]: I0209 18:34:53.624505 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:54.192988 kubelet[1740]: E0209 18:34:54.192949 1740 dns.go:156] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:54.195415 kubelet[1740]: E0209 18:34:54.195395 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:54.197409 kubelet[1740]: E0209 18:34:54.197391 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:55.200346 kubelet[1740]: E0209 18:34:55.200073 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:55.200628 kubelet[1740]: E0209 18:34:55.200492 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:55.201058 kubelet[1740]: E0209 18:34:55.201033 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:55.385458 kubelet[1740]: I0209 18:34:55.385429 1740 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:34:56.114410 kubelet[1740]: I0209 18:34:56.114328 1740 apiserver.go:52] "Watching apiserver" Feb 9 18:34:56.119036 kubelet[1740]: I0209 18:34:56.119019 1740 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:56.145395 kubelet[1740]: I0209 18:34:56.145379 1740 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:34:56.202833 kubelet[1740]: E0209 18:34:56.202813 1740 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:56.203541 kubelet[1740]: E0209 18:34:56.203526 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:56.203653 kubelet[1740]: E0209 18:34:56.202818 1740 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 9 18:34:56.203926 kubelet[1740]: E0209 18:34:56.203907 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.814862 systemd[1]: Reloading. Feb 9 18:34:57.851498 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2024-02-09T18:34:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:57.851527 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2024-02-09T18:34:57Z" level=info msg="torcx already run" Feb 9 18:34:57.925074 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Feb 9 18:34:57.925091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:57.941872 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:58.015472 systemd[1]: Stopping kubelet.service... Feb 9 18:34:58.036361 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:34:58.036917 systemd[1]: Stopped kubelet.service. Feb 9 18:34:58.039469 systemd[1]: Started kubelet.service. Feb 9 18:34:58.099031 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:58.099031 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:58.099031 kubelet[2119]: I0209 18:34:58.098629 2119 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:58.099864 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:58.099864 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:58.102462 kubelet[2119]: I0209 18:34:58.102433 2119 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:58.102462 kubelet[2119]: I0209 18:34:58.102457 2119 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:58.102631 kubelet[2119]: I0209 18:34:58.102608 2119 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:58.103829 kubelet[2119]: I0209 18:34:58.103808 2119 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:34:58.104427 kubelet[2119]: I0209 18:34:58.104405 2119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:58.105889 kubelet[2119]: W0209 18:34:58.105877 2119 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:58.106570 kubelet[2119]: I0209 18:34:58.106557 2119 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:58.106912 kubelet[2119]: I0209 18:34:58.106902 2119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:58.106970 kubelet[2119]: I0209 18:34:58.106960 2119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:58.107052 kubelet[2119]: I0209 18:34:58.107015 2119 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:58.107052 kubelet[2119]: I0209 18:34:58.107029 2119 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:58.107102 kubelet[2119]: I0209 18:34:58.107091 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:58.109246 kubelet[2119]: I0209 18:34:58.109230 2119 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:58.109311 kubelet[2119]: I0209 18:34:58.109251 2119 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:58.109311 kubelet[2119]: I0209 18:34:58.109273 2119 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:58.109311 kubelet[2119]: I0209 18:34:58.109288 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:58.110894 kubelet[2119]: I0209 18:34:58.110877 2119 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:58.111446 kubelet[2119]: I0209 18:34:58.111431 2119 server.go:1186] "Started kubelet" Feb 9 18:34:58.113149 kubelet[2119]: I0209 18:34:58.113125 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:58.114049 kubelet[2119]: I0209 18:34:58.114031 2119 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:58.114352 kubelet[2119]: I0209 18:34:58.114337 2119 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:58.116100 kubelet[2119]: E0209 18:34:58.116073 2119 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:58.116100 kubelet[2119]: E0209 18:34:58.116101 2119 kubelet.go:1386] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:58.117115 kubelet[2119]: I0209 18:34:58.117103 2119 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:58.117999 kubelet[2119]: I0209 18:34:58.117985 2119 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:58.159566 kubelet[2119]: I0209 18:34:58.159537 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:58.172203 kubelet[2119]: I0209 18:34:58.172176 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:34:58.172203 kubelet[2119]: I0209 18:34:58.172197 2119 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:58.172295 kubelet[2119]: I0209 18:34:58.172214 2119 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:58.172295 kubelet[2119]: E0209 18:34:58.172257 2119 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:34:58.192602 kubelet[2119]: I0209 18:34:58.192575 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:58.192759 kubelet[2119]: I0209 18:34:58.192748 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:58.192832 kubelet[2119]: I0209 18:34:58.192823 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:58.193039 kubelet[2119]: I0209 18:34:58.193026 2119 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:34:58.193121 kubelet[2119]: I0209 18:34:58.193110 2119 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:34:58.193173 kubelet[2119]: I0209 18:34:58.193164 2119 policy_none.go:49] "None policy: Start" Feb 9 18:34:58.193889 kubelet[2119]: I0209 18:34:58.193758 2119 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:58.193889 kubelet[2119]: I0209 18:34:58.193792 2119 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:58.193968 kubelet[2119]: I0209 18:34:58.193950 2119 state_mem.go:75] "Updated machine memory state" Feb 9 18:34:58.197007 kubelet[2119]: I0209 18:34:58.195119 2119 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:58.197007 kubelet[2119]: I0209 18:34:58.195314 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:58.217095 kubelet[2119]: I0209 18:34:58.217061 2119 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:58.224634 kubelet[2119]: I0209 18:34:58.224598 2119 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:34:58.224786 kubelet[2119]: I0209 18:34:58.224775 2119 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:34:58.272550 kubelet[2119]: I0209 18:34:58.272519 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:58.272760 kubelet[2119]: I0209 18:34:58.272742 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:58.272906 kubelet[2119]: I0209 18:34:58.272891 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:58.416542 kubelet[2119]: I0209 18:34:58.415573 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:58.416542 kubelet[2119]: I0209 18:34:58.416247 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:58.416542 kubelet[2119]: I0209 18:34:58.416284 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:34:58.416542 kubelet[2119]: I0209 18:34:58.416350 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:58.416542 kubelet[2119]: I0209 18:34:58.416373 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:58.416919 kubelet[2119]: I0209 18:34:58.416394 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:58.416919 kubelet[2119]: I0209 18:34:58.416452 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:58.416919 kubelet[2119]: I0209 18:34:58.416492 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:58.416919 kubelet[2119]: I0209 18:34:58.416515 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73322c7b2168cca9dcb3389b3357e440-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73322c7b2168cca9dcb3389b3357e440\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:58.577739 kubelet[2119]: E0209 18:34:58.577708 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
18:34:58.615170 kubelet[2119]: E0209 18:34:58.615137 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:58.815743 kubelet[2119]: E0209 18:34:58.815714 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.110362 kubelet[2119]: I0209 18:34:59.110256 2119 apiserver.go:52] "Watching apiserver" Feb 9 18:34:59.115566 kubelet[2119]: I0209 18:34:59.115529 2119 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:59.121691 kubelet[2119]: I0209 18:34:59.121653 2119 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:34:59.186288 kubelet[2119]: E0209 18:34:59.186259 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.186546 kubelet[2119]: E0209 18:34:59.186526 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.308803 sudo[1332]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:59.314162 sshd[1327]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:59.316756 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:47698.service: Deactivated successfully. Feb 9 18:34:59.318239 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:34:59.318903 systemd-logind[1198]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:34:59.319943 systemd-logind[1198]: Removed session 5. 
Feb 9 18:34:59.514641 kubelet[2119]: E0209 18:34:59.514594 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:34:59.514932 kubelet[2119]: E0209 18:34:59.514906 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:00.115467 kubelet[2119]: I0209 18:35:00.115431 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.115316526 pod.CreationTimestamp="2024-02-09 18:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:59.722867577 +0000 UTC m=+1.679522965" watchObservedRunningTime="2024-02-09 18:35:00.115316526 +0000 UTC m=+2.071971914" Feb 9 18:35:00.187798 kubelet[2119]: E0209 18:35:00.187773 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:00.188191 kubelet[2119]: E0209 18:35:00.188175 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:00.845433 kubelet[2119]: E0209 18:35:00.845399 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:00.914803 kubelet[2119]: I0209 18:35:00.914771 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.9147362919999997 pod.CreationTimestamp="2024-02-09 18:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:00.515465097 +0000 UTC m=+2.472120525" watchObservedRunningTime="2024-02-09 18:35:00.914736292 +0000 UTC m=+2.871391680" Feb 9 18:35:00.914939 kubelet[2119]: I0209 18:35:00.914860 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.914845234 pod.CreationTimestamp="2024-02-09 18:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:00.914349572 +0000 UTC m=+2.871004960" watchObservedRunningTime="2024-02-09 18:35:00.914845234 +0000 UTC m=+2.871500662" Feb 9 18:35:02.548797 kubelet[2119]: E0209 18:35:02.548767 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:04.441179 kubelet[2119]: E0209 18:35:04.441142 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:05.194769 kubelet[2119]: E0209 18:35:05.193538 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:10.853091 kubelet[2119]: E0209 18:35:10.853063 2119 dns.go:156] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:11.713493 kubelet[2119]: I0209 18:35:11.713464 2119 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:35:11.713841 env[1212]: time="2024-02-09T18:35:11.713798629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:35:11.714152 kubelet[2119]: I0209 18:35:11.714026 2119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:35:12.556929 kubelet[2119]: E0209 18:35:12.556897 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:12.604142 kubelet[2119]: I0209 18:35:12.604095 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:12.604572 kubelet[2119]: I0209 18:35:12.604546 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:12.617266 kubelet[2119]: I0209 18:35:12.617226 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a8aef582-49de-4fa0-8550-b9dac7ddffad-run\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617266 kubelet[2119]: I0209 18:35:12.617273 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/531c73d6-d91b-40a1-a581-f35fc8f47b9c-lib-modules\") pod \"kube-proxy-6dprs\" (UID: \"531c73d6-d91b-40a1-a581-f35fc8f47b9c\") " pod="kube-system/kube-proxy-6dprs" Feb 9 18:35:12.617396 kubelet[2119]: I0209 18:35:12.617295 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a8aef582-49de-4fa0-8550-b9dac7ddffad-cni\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617396 kubelet[2119]: I0209 18:35:12.617316 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a8aef582-49de-4fa0-8550-b9dac7ddffad-flannel-cfg\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617396 kubelet[2119]: I0209 18:35:12.617336 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8aef582-49de-4fa0-8550-b9dac7ddffad-xtables-lock\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617396 kubelet[2119]: I0209 18:35:12.617356 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w967m\" (UniqueName: \"kubernetes.io/projected/a8aef582-49de-4fa0-8550-b9dac7ddffad-kube-api-access-w967m\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617396 kubelet[2119]: I0209 18:35:12.617377 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/531c73d6-d91b-40a1-a581-f35fc8f47b9c-xtables-lock\") pod \"kube-proxy-6dprs\" (UID: \"531c73d6-d91b-40a1-a581-f35fc8f47b9c\") " pod="kube-system/kube-proxy-6dprs" Feb 9 18:35:12.617513 kubelet[2119]: I0209 18:35:12.617399 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zx6z\" (UniqueName: \"kubernetes.io/projected/531c73d6-d91b-40a1-a581-f35fc8f47b9c-kube-api-access-9zx6z\") pod \"kube-proxy-6dprs\" (UID: \"531c73d6-d91b-40a1-a581-f35fc8f47b9c\") " pod="kube-system/kube-proxy-6dprs" Feb 9 18:35:12.617513 kubelet[2119]: I0209 18:35:12.617427 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a8aef582-49de-4fa0-8550-b9dac7ddffad-cni-plugin\") pod \"kube-flannel-ds-bmrsv\" (UID: \"a8aef582-49de-4fa0-8550-b9dac7ddffad\") " pod="kube-flannel/kube-flannel-ds-bmrsv" Feb 9 18:35:12.617513 kubelet[2119]: I0209 18:35:12.617447 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/531c73d6-d91b-40a1-a581-f35fc8f47b9c-kube-proxy\") pod \"kube-proxy-6dprs\" (UID: \"531c73d6-d91b-40a1-a581-f35fc8f47b9c\") " pod="kube-system/kube-proxy-6dprs" Feb 9 18:35:12.998037 update_engine[1199]: I0209 18:35:12.997915 1199 update_attempter.cc:509] Updating boot flags... Feb 9 18:35:13.207562 kubelet[2119]: E0209 18:35:13.207536 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:13.208403 env[1212]: time="2024-02-09T18:35:13.208015717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bmrsv,Uid:a8aef582-49de-4fa0-8550-b9dac7ddffad,Namespace:kube-flannel,Attempt:0,}" Feb 9 18:35:13.211386 kubelet[2119]: E0209 18:35:13.211242 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:13.211907 env[1212]: time="2024-02-09T18:35:13.211654276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dprs,Uid:531c73d6-d91b-40a1-a581-f35fc8f47b9c,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:13.228126 env[1212]: time="2024-02-09T18:35:13.228059486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:13.228257 env[1212]: time="2024-02-09T18:35:13.228155451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:13.230123 env[1212]: time="2024-02-09T18:35:13.229024732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:13.230123 env[1212]: time="2024-02-09T18:35:13.229220342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb pid=2226 runtime=io.containerd.runc.v2 Feb 9 18:35:13.235707 env[1212]: time="2024-02-09T18:35:13.235644387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:13.235809 env[1212]: time="2024-02-09T18:35:13.235716820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:13.235809 env[1212]: time="2024-02-09T18:35:13.235742352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:13.235994 env[1212]: time="2024-02-09T18:35:13.235932320Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ed4fe66140ee77949344847b68460f5464669bac5a748ef91686b6308b143f pid=2245 runtime=io.containerd.runc.v2 Feb 9 18:35:13.287107 env[1212]: time="2024-02-09T18:35:13.287008610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bmrsv,Uid:a8aef582-49de-4fa0-8550-b9dac7ddffad,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\"" Feb 9 18:35:13.287849 kubelet[2119]: E0209 18:35:13.287825 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:13.288891 env[1212]: time="2024-02-09T18:35:13.288866627Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 9 18:35:13.289556 env[1212]: time="2024-02-09T18:35:13.289528853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dprs,Uid:531c73d6-d91b-40a1-a581-f35fc8f47b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"29ed4fe66140ee77949344847b68460f5464669bac5a748ef91686b6308b143f\"" Feb 9 18:35:13.291619 kubelet[2119]: E0209 18:35:13.291144 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:13.294785 env[1212]: time="2024-02-09T18:35:13.294742219Z" level=info msg="CreateContainer within sandbox \"29ed4fe66140ee77949344847b68460f5464669bac5a748ef91686b6308b143f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:35:13.307777 env[1212]: time="2024-02-09T18:35:13.307733774Z" level=info msg="CreateContainer within sandbox \"29ed4fe66140ee77949344847b68460f5464669bac5a748ef91686b6308b143f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f2889518a044c8285cea071c84c8c76a10a902e3d52f94b7fdba622df3cbd8d\"" Feb 9 18:35:13.309059 env[1212]: time="2024-02-09T18:35:13.309026451Z" level=info msg="StartContainer for \"1f2889518a044c8285cea071c84c8c76a10a902e3d52f94b7fdba622df3cbd8d\"" Feb 9 18:35:13.364807 env[1212]: time="2024-02-09T18:35:13.364751366Z" level=info msg="StartContainer for \"1f2889518a044c8285cea071c84c8c76a10a902e3d52f94b7fdba622df3cbd8d\" returns successfully" Feb 9 18:35:14.209470 kubelet[2119]: E0209 18:35:14.209152 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:14.353565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119843832.mount: Deactivated successfully. 
Feb 9 18:35:14.390371 env[1212]: time="2024-02-09T18:35:14.390325295Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:14.391824 env[1212]: time="2024-02-09T18:35:14.391784896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:14.393451 env[1212]: time="2024-02-09T18:35:14.393426977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:14.394587 env[1212]: time="2024-02-09T18:35:14.394558274Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:14.395715 env[1212]: time="2024-02-09T18:35:14.395677845Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\"" Feb 9 18:35:14.397122 env[1212]: time="2024-02-09T18:35:14.397091506Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 18:35:14.405722 env[1212]: time="2024-02-09T18:35:14.405679677Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ca9b5ca9debe2a7c5f4251a842d07693cae3ff6cc2a1e312026fff3d0079d438\"" Feb 9 18:35:14.407272 env[1212]: time="2024-02-09T18:35:14.406966682Z" level=info msg="StartContainer for \"ca9b5ca9debe2a7c5f4251a842d07693cae3ff6cc2a1e312026fff3d0079d438\"" Feb 9 18:35:14.455441 env[1212]: time="2024-02-09T18:35:14.455391226Z" level=info msg="StartContainer for \"ca9b5ca9debe2a7c5f4251a842d07693cae3ff6cc2a1e312026fff3d0079d438\" returns successfully" Feb 9 18:35:14.494296 env[1212]: time="2024-02-09T18:35:14.494189383Z" level=info msg="shim disconnected" id=ca9b5ca9debe2a7c5f4251a842d07693cae3ff6cc2a1e312026fff3d0079d438 Feb 9 18:35:14.494296 env[1212]: time="2024-02-09T18:35:14.494240125Z" level=warning msg="cleaning up after shim disconnected" id=ca9b5ca9debe2a7c5f4251a842d07693cae3ff6cc2a1e312026fff3d0079d438 namespace=k8s.io Feb 9 18:35:14.494296 env[1212]: time="2024-02-09T18:35:14.494250009Z" level=info msg="cleaning up dead shim" Feb 9 18:35:14.501318 env[1212]: time="2024-02-09T18:35:14.501270932Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2483 runtime=io.containerd.runc.v2\n" Feb 9 18:35:15.212879 kubelet[2119]: E0209 18:35:15.212851 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:15.215150 kubelet[2119]: E0209 18:35:15.213574 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:15.216760 env[1212]: time="2024-02-09T18:35:15.216241292Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 9 18:35:15.227104 kubelet[2119]: I0209 18:35:15.227070 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6dprs" podStartSLOduration=3.227038927 pod.CreationTimestamp="2024-02-09 18:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:14.219267341 +0000 UTC m=+16.175922729" watchObservedRunningTime="2024-02-09 18:35:15.227038927 +0000 UTC m=+17.183694315" Feb 9 18:35:16.930718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200489248.mount: Deactivated successfully. Feb 9 18:35:17.520692 env[1212]: time="2024-02-09T18:35:17.520396652Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:17.521975 env[1212]: time="2024-02-09T18:35:17.521917910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:17.523696 env[1212]: time="2024-02-09T18:35:17.523633562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:17.529004 env[1212]: time="2024-02-09T18:35:17.528951784Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:17.529325 env[1212]: time="2024-02-09T18:35:17.529293634Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\"" Feb 9 18:35:17.533565 env[1212]: time="2024-02-09T18:35:17.533482746Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:35:17.543046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782536416.mount: Deactivated successfully. 
Feb 9 18:35:17.545360 env[1212]: time="2024-02-09T18:35:17.545319284Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0fd0557bbd38df1023db2636a23ef28daccb04321c817b595d44b22a84ae9fba\"" Feb 9 18:35:17.545765 env[1212]: time="2024-02-09T18:35:17.545740004Z" level=info msg="StartContainer for \"0fd0557bbd38df1023db2636a23ef28daccb04321c817b595d44b22a84ae9fba\"" Feb 9 18:35:17.631079 env[1212]: time="2024-02-09T18:35:17.631035102Z" level=info msg="StartContainer for \"0fd0557bbd38df1023db2636a23ef28daccb04321c817b595d44b22a84ae9fba\" returns successfully" Feb 9 18:35:17.657010 kubelet[2119]: I0209 18:35:17.656230 2119 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:35:17.678050 kubelet[2119]: I0209 18:35:17.678008 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:17.682466 kubelet[2119]: I0209 18:35:17.682417 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:17.722630 env[1212]: time="2024-02-09T18:35:17.722575493Z" level=info msg="shim disconnected" id=0fd0557bbd38df1023db2636a23ef28daccb04321c817b595d44b22a84ae9fba Feb 9 18:35:17.722630 env[1212]: time="2024-02-09T18:35:17.722624232Z" level=warning msg="cleaning up after shim disconnected" id=0fd0557bbd38df1023db2636a23ef28daccb04321c817b595d44b22a84ae9fba namespace=k8s.io Feb 9 18:35:17.722630 env[1212]: time="2024-02-09T18:35:17.722635676Z" level=info msg="cleaning up dead shim" Feb 9 18:35:17.729575 env[1212]: time="2024-02-09T18:35:17.729538420Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2539 runtime=io.containerd.runc.v2\n" Feb 9 18:35:17.764888 kubelet[2119]: I0209 18:35:17.764849 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/335d0a07-33e1-4591-b062-024a671a2335-config-volume\") pod \"coredns-787d4945fb-5v5hv\" (UID: \"335d0a07-33e1-4591-b062-024a671a2335\") " pod="kube-system/coredns-787d4945fb-5v5hv" Feb 9 18:35:17.765023 kubelet[2119]: I0209 18:35:17.764903 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d7dd89d-0502-4f41-9afe-53bfca70450a-config-volume\") pod \"coredns-787d4945fb-t66br\" (UID: \"4d7dd89d-0502-4f41-9afe-53bfca70450a\") " pod="kube-system/coredns-787d4945fb-t66br" Feb 9 18:35:17.765023 kubelet[2119]: I0209 18:35:17.764931 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767c4\" (UniqueName: \"kubernetes.io/projected/4d7dd89d-0502-4f41-9afe-53bfca70450a-kube-api-access-767c4\") pod \"coredns-787d4945fb-t66br\" (UID: \"4d7dd89d-0502-4f41-9afe-53bfca70450a\") " pod="kube-system/coredns-787d4945fb-t66br" Feb 9 18:35:17.765023 kubelet[2119]: I0209 18:35:17.764970 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh7js\" (UniqueName: \"kubernetes.io/projected/335d0a07-33e1-4591-b062-024a671a2335-kube-api-access-nh7js\") pod \"coredns-787d4945fb-5v5hv\" (UID: \"335d0a07-33e1-4591-b062-024a671a2335\") " pod="kube-system/coredns-787d4945fb-5v5hv" Feb 9 18:35:17.981159 kubelet[2119]: E0209 18:35:17.981119 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:17.982404 env[1212]: time="2024-02-09T18:35:17.982332858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5v5hv,Uid:335d0a07-33e1-4591-b062-024a671a2335,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:17.986557 kubelet[2119]: E0209 18:35:17.986228 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:17.986826 env[1212]: time="2024-02-09T18:35:17.986785590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-t66br,Uid:4d7dd89d-0502-4f41-9afe-53bfca70450a,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:18.030324 env[1212]: time="2024-02-09T18:35:18.030253085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5v5hv,Uid:335d0a07-33e1-4591-b062-024a671a2335,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:35:18.030529 kubelet[2119]: E0209 18:35:18.030502 2119 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:35:18.030584 kubelet[2119]: E0209 18:35:18.030562 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5v5hv" Feb 9 18:35:18.030584 kubelet[2119]: E0209 18:35:18.030581 2119 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5v5hv" Feb 9 18:35:18.030651 kubelet[2119]: E0209 18:35:18.030634 2119 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-5v5hv_kube-system(335d0a07-33e1-4591-b062-024a671a2335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-5v5hv_kube-system(335d0a07-33e1-4591-b062-024a671a2335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-5v5hv" podUID=335d0a07-33e1-4591-b062-024a671a2335 Feb 9 18:35:18.033742 env[1212]: time="2024-02-09T18:35:18.033665763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-t66br,Uid:4d7dd89d-0502-4f41-9afe-53bfca70450a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4885bd6dbab4802343279b80d03e8d5e1b31f08937fd3c20617aa09bb1fd0bb5\": plugin type=\"flannel\" failed (add): 
open /run/flannel/subnet.env: no such file or directory" Feb 9 18:35:18.034008 kubelet[2119]: E0209 18:35:18.033971 2119 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885bd6dbab4802343279b80d03e8d5e1b31f08937fd3c20617aa09bb1fd0bb5\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:35:18.034066 kubelet[2119]: E0209 18:35:18.034016 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885bd6dbab4802343279b80d03e8d5e1b31f08937fd3c20617aa09bb1fd0bb5\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-t66br" Feb 9 18:35:18.034066 kubelet[2119]: E0209 18:35:18.034035 2119 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885bd6dbab4802343279b80d03e8d5e1b31f08937fd3c20617aa09bb1fd0bb5\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-t66br" Feb 9 18:35:18.034133 kubelet[2119]: E0209 18:35:18.034069 2119 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-t66br_kube-system(4d7dd89d-0502-4f41-9afe-53bfca70450a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-t66br_kube-system(4d7dd89d-0502-4f41-9afe-53bfca70450a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4885bd6dbab4802343279b80d03e8d5e1b31f08937fd3c20617aa09bb1fd0bb5\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-t66br" podUID=4d7dd89d-0502-4f41-9afe-53bfca70450a Feb 9 18:35:18.219109 kubelet[2119]: E0209 18:35:18.218707 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:18.223197 env[1212]: time="2024-02-09T18:35:18.223133380Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 18:35:18.237417 env[1212]: time="2024-02-09T18:35:18.236805860Z" level=info msg="CreateContainer within sandbox \"2ebdd489d1695a17ccb0a3da501a826e667115b1cd3f07961b9f22aab9b5b3fb\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"5c1677d5eda98695124557710a5ba6f9fa1d3126bcfef79eed3c5034c2cced85\"" Feb 9 18:35:18.238147 env[1212]: time="2024-02-09T18:35:18.238114855Z" level=info msg="StartContainer for \"5c1677d5eda98695124557710a5ba6f9fa1d3126bcfef79eed3c5034c2cced85\"" Feb 9 18:35:18.289466 env[1212]: time="2024-02-09T18:35:18.289415666Z" level=info msg="StartContainer for \"5c1677d5eda98695124557710a5ba6f9fa1d3126bcfef79eed3c5034c2cced85\" returns successfully" Feb 9 18:35:18.834667 systemd[1]: run-netns-cni\x2d90903e87\x2d0d1b\x2d8c52\x2d2910\x2db5717561e87a.mount: Deactivated successfully. Feb 9 18:35:18.834803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5780885229a7bafea14bf3c5cf7c0979da4dcd2bfdd04ae8ec43303091e96d7-shm.mount: Deactivated successfully. 
Feb 9 18:35:19.224010 kubelet[2119]: E0209 18:35:19.223217 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:19.235011 kubelet[2119]: I0209 18:35:19.234847 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bmrsv" podStartSLOduration=-9.223372029619968e+09 pod.CreationTimestamp="2024-02-09 18:35:12 +0000 UTC" firstStartedPulling="2024-02-09 18:35:13.288531393 +0000 UTC m=+15.245186741" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:19.232766157 +0000 UTC m=+21.189421585" watchObservedRunningTime="2024-02-09 18:35:19.234807704 +0000 UTC m=+21.191463092" Feb 9 18:35:19.808732 systemd-networkd[1094]: flannel.1: Link UP Feb 9 18:35:19.808739 systemd-networkd[1094]: flannel.1: Gained carrier Feb 9 18:35:20.225168 kubelet[2119]: E0209 18:35:20.225144 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:21.327091 systemd-networkd[1094]: flannel.1: Gained IPv6LL Feb 9 18:35:29.172716 kubelet[2119]: E0209 18:35:29.172668 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:29.173281 env[1212]: time="2024-02-09T18:35:29.173223013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-t66br,Uid:4d7dd89d-0502-4f41-9afe-53bfca70450a,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:29.193283 systemd-networkd[1094]: cni0: Link UP Feb 9 18:35:29.193295 systemd-networkd[1094]: cni0: Gained carrier Feb 9 18:35:29.195195 systemd-networkd[1094]: cni0: Lost carrier Feb 9 18:35:29.198366 systemd-networkd[1094]: vethd1148156: Link UP Feb 9 18:35:29.200024 kernel: cni0: port 1(vethd1148156) entered blocking state Feb 9 18:35:29.200095 kernel: cni0: port 1(vethd1148156) entered disabled state Feb 9 18:35:29.201256 kernel: device vethd1148156 entered promiscuous mode Feb 9 18:35:29.201329 kernel: cni0: port 1(vethd1148156) entered blocking state Feb 9 18:35:29.201358 kernel: cni0: port 1(vethd1148156) entered forwarding state Feb 9 18:35:29.204012 kernel: cni0: port 1(vethd1148156) entered disabled state Feb 9 18:35:29.216234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd1148156: link becomes ready Feb 9 18:35:29.216312 kernel: cni0: port 1(vethd1148156) entered blocking state Feb 9 18:35:29.216331 kernel: cni0: port 1(vethd1148156) entered forwarding state Feb 9 18:35:29.216470 systemd-networkd[1094]: vethd1148156: Gained carrier Feb 9 18:35:29.216660 systemd-networkd[1094]: cni0: Gained carrier Feb 9 18:35:29.217864 env[1212]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a928), "name":"cbr0", "type":"bridge"} Feb 9 18:35:29.227358 env[1212]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:35:29.227286504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:29.227534 env[1212]: time="2024-02-09T18:35:29.227327274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:29.227534 env[1212]: time="2024-02-09T18:35:29.227338476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:29.228708 env[1212]: time="2024-02-09T18:35:29.227970662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9aaace9d9396f07662d31823a661615ea8399c0d082cab6cb687cbfcf2063df pid=2792 runtime=io.containerd.runc.v2 Feb 9 18:35:29.272912 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:29.290415 env[1212]: time="2024-02-09T18:35:29.290372720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-t66br,Uid:4d7dd89d-0502-4f41-9afe-53bfca70450a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9aaace9d9396f07662d31823a661615ea8399c0d082cab6cb687cbfcf2063df\"" Feb 9 18:35:29.291235 kubelet[2119]: E0209 18:35:29.291065 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:29.293313 env[1212]: time="2024-02-09T18:35:29.293275070Z" level=info msg="CreateContainer within sandbox \"c9aaace9d9396f07662d31823a661615ea8399c0d082cab6cb687cbfcf2063df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:35:29.302634 env[1212]: time="2024-02-09T18:35:29.302594223Z" level=info msg="CreateContainer within sandbox \"c9aaace9d9396f07662d31823a661615ea8399c0d082cab6cb687cbfcf2063df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eccd1e24a8ae8e6c3efd77de7268712c1ae1f17f1b4a105f8a7b69ce1b11f5c4\"" Feb 9 18:35:29.303013 env[1212]: time="2024-02-09T18:35:29.302975912Z" level=info msg="StartContainer for \"eccd1e24a8ae8e6c3efd77de7268712c1ae1f17f1b4a105f8a7b69ce1b11f5c4\"" Feb 9 18:35:29.379995 env[1212]: time="2024-02-09T18:35:29.377507251Z" level=info msg="StartContainer for \"eccd1e24a8ae8e6c3efd77de7268712c1ae1f17f1b4a105f8a7b69ce1b11f5c4\" returns successfully" Feb 9 18:35:30.174033 kubelet[2119]: E0209 18:35:30.173663 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:30.174407 env[1212]: time="2024-02-09T18:35:30.174349262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5v5hv,Uid:335d0a07-33e1-4591-b062-024a671a2335,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:30.191374 systemd-networkd[1094]: veth3703425f: Link UP Feb 9 18:35:30.193214 kernel: cni0: port 2(veth3703425f) entered blocking state Feb 9 18:35:30.193284 kernel: cni0: port 2(veth3703425f) entered disabled state Feb 9 18:35:30.193306 kernel: device veth3703425f entered promiscuous mode Feb 9 18:35:30.193324 kernel: cni0: 
port 2(veth3703425f) entered blocking state Feb 9 18:35:30.194291 kernel: cni0: port 2(veth3703425f) entered forwarding state Feb 9 18:35:30.198008 kernel: cni0: port 2(veth3703425f) entered disabled state Feb 9 18:35:30.198063 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:35:30.199156 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3703425f: link becomes ready Feb 9 18:35:30.199199 kernel: cni0: port 2(veth3703425f) entered blocking state Feb 9 18:35:30.200120 kernel: cni0: port 2(veth3703425f) entered forwarding state Feb 9 18:35:30.200227 systemd-networkd[1094]: veth3703425f: Gained carrier Feb 9 18:35:30.201707 env[1212]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001e928), "name":"cbr0", "type":"bridge"} Feb 9 18:35:30.210791 env[1212]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:35:30.210729057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:30.210947 env[1212]: time="2024-02-09T18:35:30.210923981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:30.211041 env[1212]: time="2024-02-09T18:35:30.211020082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:30.211274 env[1212]: time="2024-02-09T18:35:30.211245372Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff641e772db0866eec22ecb1b5ad9942ce3b7a713cae872c800f14b4c06793e pid=2905 runtime=io.containerd.runc.v2 Feb 9 18:35:30.227755 systemd[1]: run-containerd-runc-k8s.io-4ff641e772db0866eec22ecb1b5ad9942ce3b7a713cae872c800f14b4c06793e-runc.g4cwPH.mount: Deactivated successfully. 
Feb 9 18:35:30.243189 kubelet[2119]: E0209 18:35:30.242967 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:30.245321 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:30.263231 kubelet[2119]: I0209 18:35:30.262497 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-t66br" podStartSLOduration=18.262456116 pod.CreationTimestamp="2024-02-09 18:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:30.253169565 +0000 UTC m=+32.209824953" watchObservedRunningTime="2024-02-09 18:35:30.262456116 +0000 UTC m=+32.219111664" Feb 9 18:35:30.275850 env[1212]: time="2024-02-09T18:35:30.275779488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5v5hv,Uid:335d0a07-33e1-4591-b062-024a671a2335,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff641e772db0866eec22ecb1b5ad9942ce3b7a713cae872c800f14b4c06793e\"" Feb 9 18:35:30.276505 kubelet[2119]: E0209 18:35:30.276483 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:30.278885 env[1212]: time="2024-02-09T18:35:30.278841331Z" level=info msg="CreateContainer within sandbox \"4ff641e772db0866eec22ecb1b5ad9942ce3b7a713cae872c800f14b4c06793e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:35:30.288102 systemd-networkd[1094]: cni0: Gained IPv6LL Feb 9 18:35:30.289469 env[1212]: time="2024-02-09T18:35:30.289433334Z" level=info msg="CreateContainer within sandbox \"4ff641e772db0866eec22ecb1b5ad9942ce3b7a713cae872c800f14b4c06793e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"635157ad74de20a1cea436d8bfcfd4b9786b90e12173fce49a23b1a9b951d4c2\"" Feb 9 18:35:30.290107 env[1212]: time="2024-02-09T18:35:30.290078478Z" level=info msg="StartContainer for \"635157ad74de20a1cea436d8bfcfd4b9786b90e12173fce49a23b1a9b951d4c2\"" Feb 9 18:35:30.375511 env[1212]: time="2024-02-09T18:35:30.375450722Z" level=info msg="StartContainer for \"635157ad74de20a1cea436d8bfcfd4b9786b90e12173fce49a23b1a9b951d4c2\" returns successfully" Feb 9 18:35:30.607134 systemd-networkd[1094]: vethd1148156: Gained IPv6LL Feb 9 18:35:31.245510 kubelet[2119]: E0209 18:35:31.245470 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:31.245510 kubelet[2119]: E0209 18:35:31.245491 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:31.254654 kubelet[2119]: I0209 18:35:31.254614 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-5v5hv" podStartSLOduration=19.254580927 pod.CreationTimestamp="2024-02-09 18:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:31.254520114 +0000 UTC m=+33.211175502" watchObservedRunningTime="2024-02-09 18:35:31.254580927 +0000 UTC m=+33.211236275" Feb 9 
18:35:32.143102 systemd-networkd[1094]: veth3703425f: Gained IPv6LL Feb 9 18:35:32.246964 kubelet[2119]: E0209 18:35:32.246919 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:32.247425 kubelet[2119]: E0209 18:35:32.247151 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:33.248332 kubelet[2119]: E0209 18:35:33.248307 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:34.249863 kubelet[2119]: E0209 18:35:34.249831 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:36.344855 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:54546.service. Feb 9 18:35:36.384167 sshd[3118]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:36.385614 sshd[3118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:36.389278 systemd-logind[1198]: New session 6 of user core. Feb 9 18:35:36.389735 systemd[1]: Started session-6.scope. Feb 9 18:35:36.512448 sshd[3118]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:36.514771 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:54546.service: Deactivated successfully. Feb 9 18:35:36.515754 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:35:36.515771 systemd-logind[1198]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:35:36.516459 systemd-logind[1198]: Removed session 6. Feb 9 18:35:41.516084 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:54556.service. Feb 9 18:35:41.557297 sshd[3151]: Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:41.558329 sshd[3151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:41.561423 systemd-logind[1198]: New session 7 of user core. Feb 9 18:35:41.562288 systemd[1]: Started session-7.scope. Feb 9 18:35:41.666799 sshd[3151]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:41.669299 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:54556.service: Deactivated successfully. Feb 9 18:35:41.670221 systemd-logind[1198]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:35:41.670280 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:35:41.670974 systemd-logind[1198]: Removed session 7. Feb 9 18:35:46.670506 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:43918.service. Feb 9 18:35:46.709704 sshd[3187]: Accepted publickey for core from 10.0.0.1 port 43918 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:46.711199 sshd[3187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:46.715405 systemd[1]: Started session-8.scope. Feb 9 18:35:46.715513 systemd-logind[1198]: New session 8 of user core. Feb 9 18:35:46.830027 sshd[3187]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:46.832317 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:43928.service. 
Feb 9 18:35:46.833142 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:43918.service: Deactivated successfully. Feb 9 18:35:46.834440 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:35:46.834917 systemd-logind[1198]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:35:46.835692 systemd-logind[1198]: Removed session 8. Feb 9 18:35:46.871957 sshd[3200]: Accepted publickey for core from 10.0.0.1 port 43928 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:46.873061 sshd[3200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:46.877163 systemd-logind[1198]: New session 9 of user core. Feb 9 18:35:46.877454 systemd[1]: Started session-9.scope. Feb 9 18:35:47.076356 sshd[3200]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:47.080274 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:43944.service. Feb 9 18:35:47.087891 systemd-logind[1198]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:35:47.088446 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:43928.service: Deactivated successfully. Feb 9 18:35:47.089438 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:35:47.091694 systemd-logind[1198]: Removed session 9. Feb 9 18:35:47.128713 sshd[3213]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:47.129858 sshd[3213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:47.134190 systemd-logind[1198]: New session 10 of user core. Feb 9 18:35:47.134844 systemd[1]: Started session-10.scope. Feb 9 18:35:47.242239 sshd[3213]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:47.244768 systemd-logind[1198]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:35:47.245244 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:43944.service: Deactivated successfully. Feb 9 18:35:47.246100 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:35:47.246676 systemd-logind[1198]: Removed session 10. Feb 9 18:35:52.245507 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:43956.service. Feb 9 18:35:52.284858 sshd[3247]: Accepted publickey for core from 10.0.0.1 port 43956 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:52.286000 sshd[3247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:52.289168 systemd-logind[1198]: New session 11 of user core. Feb 9 18:35:52.290070 systemd[1]: Started session-11.scope. Feb 9 18:35:52.394629 sshd[3247]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:52.396822 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:43966.service. Feb 9 18:35:52.397376 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:43956.service: Deactivated successfully. Feb 9 18:35:52.398323 systemd-logind[1198]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:35:52.398352 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:35:52.399197 systemd-logind[1198]: Removed session 11. Feb 9 18:35:52.437083 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 43966 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:52.438221 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:52.441450 systemd-logind[1198]: New session 12 of user core. Feb 9 18:35:52.442328 systemd[1]: Started session-12.scope. 
Feb 9 18:35:52.656362 sshd[3259]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:52.658547 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:50318.service. Feb 9 18:35:52.660140 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:43966.service: Deactivated successfully. Feb 9 18:35:52.661020 systemd-logind[1198]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:35:52.661078 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:35:52.661786 systemd-logind[1198]: Removed session 12. Feb 9 18:35:52.699033 sshd[3271]: Accepted publickey for core from 10.0.0.1 port 50318 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:52.700454 sshd[3271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:52.704272 systemd-logind[1198]: New session 13 of user core. Feb 9 18:35:52.704416 systemd[1]: Started session-13.scope. Feb 9 18:35:53.467432 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:50324.service. Feb 9 18:35:53.468128 sshd[3271]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:53.472554 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:50318.service: Deactivated successfully. Feb 9 18:35:53.473540 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:35:53.473580 systemd-logind[1198]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:35:53.479272 systemd-logind[1198]: Removed session 13. Feb 9 18:35:53.537484 sshd[3295]: Accepted publickey for core from 10.0.0.1 port 50324 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:53.538626 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:53.542937 systemd[1]: Started session-14.scope. Feb 9 18:35:53.543159 systemd-logind[1198]: New session 14 of user core. Feb 9 18:35:53.727536 sshd[3295]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:53.732130 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:50338.service. Feb 9 18:35:53.733753 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:50324.service: Deactivated successfully. Feb 9 18:35:53.734844 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:35:53.735032 systemd-logind[1198]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:35:53.737069 systemd-logind[1198]: Removed session 14. Feb 9 18:35:53.772132 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:53.773356 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:53.777038 systemd-logind[1198]: New session 15 of user core. Feb 9 18:35:53.777506 systemd[1]: Started session-15.scope. Feb 9 18:35:53.882677 sshd[3351]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:53.885217 systemd-logind[1198]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:35:53.885364 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:50338.service: Deactivated successfully. Feb 9 18:35:53.886204 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:35:53.886695 systemd-logind[1198]: Removed session 15. Feb 9 18:35:58.886316 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:50352.service. 
Feb 9 18:35:58.925167 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 50352 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:58.926384 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:58.929802 systemd-logind[1198]: New session 16 of user core. Feb 9 18:35:58.930588 systemd[1]: Started session-16.scope. Feb 9 18:35:59.034703 sshd[3414]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:59.037495 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:50352.service: Deactivated successfully. Feb 9 18:35:59.038511 systemd-logind[1198]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:35:59.038564 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:35:59.039181 systemd-logind[1198]: Removed session 16. Feb 9 18:36:04.039915 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:54846.service. Feb 9 18:36:04.081685 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:04.083230 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:04.086391 systemd-logind[1198]: New session 17 of user core. Feb 9 18:36:04.087252 systemd[1]: Started session-17.scope. Feb 9 18:36:04.194307 sshd[3446]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:04.196809 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:54846.service: Deactivated successfully. Feb 9 18:36:04.197934 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:36:04.197945 systemd-logind[1198]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:36:04.198832 systemd-logind[1198]: Removed session 17. Feb 9 18:36:09.196783 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:54862.service. Feb 9 18:36:09.236117 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 54862 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:09.237543 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:09.241464 systemd[1]: Started session-18.scope. Feb 9 18:36:09.241770 systemd-logind[1198]: New session 18 of user core. Feb 9 18:36:09.343542 sshd[3478]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:09.345931 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:54862.service: Deactivated successfully. Feb 9 18:36:09.346915 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:36:09.347292 systemd-logind[1198]: Session 18 logged out. Waiting for processes to exit. Feb 9 18:36:09.347912 systemd-logind[1198]: Removed session 18. Feb 9 18:36:10.173780 kubelet[2119]: E0209 18:36:10.173746 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:13.174971 kubelet[2119]: E0209 18:36:13.173595 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:14.346442 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:52374.service. Feb 9 18:36:14.385425 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 52374 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:14.386457 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:14.389629 systemd-logind[1198]: New session 19 of user core. 
Feb 9 18:36:14.390456 systemd[1]: Started session-19.scope. Feb 9 18:36:14.493199 sshd[3512]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:14.495679 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:52374.service: Deactivated successfully. Feb 9 18:36:14.496583 systemd-logind[1198]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:36:14.496648 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:36:14.497374 systemd-logind[1198]: Removed session 19.