Feb 9 18:37:56.727535 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:37:56.727554 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:37:56.727562 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:37:56.727568 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 18:37:56.727573 kernel: random: crng init done
Feb 9 18:37:56.727578 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:37:56.727584 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 18:37:56.727591 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 18:37:56.727596 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727602 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727607 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727613 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727618 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727624 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727631 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727637 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727643 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:37:56.727649 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 18:37:56.727654 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:37:56.727660 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:37:56.727666 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 18:37:56.727671 kernel: Zone ranges:
Feb 9 18:37:56.727677 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:37:56.727684 kernel: DMA32 empty
Feb 9 18:37:56.727689 kernel: Normal empty
Feb 9 18:37:56.727695 kernel: Movable zone start for each node
Feb 9 18:37:56.727701 kernel: Early memory node ranges
Feb 9 18:37:56.727706 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 18:37:56.727712 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 18:37:56.727718 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 18:37:56.727724 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 18:37:56.727730 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 18:37:56.727735 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 18:37:56.727741 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 18:37:56.727756 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:37:56.727766 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 18:37:56.727772 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:37:56.727777 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:37:56.727783 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:37:56.727789 kernel: psci: Trusted OS migration not required
Feb 9 18:37:56.727798 kernel: psci: SMC Calling Convention v1.1
Feb 9 18:37:56.727804 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 18:37:56.727811 kernel: ACPI: SRAT not present
Feb 9 18:37:56.727817 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:37:56.727823 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:37:56.727830 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 18:37:56.727836 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:37:56.727842 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:37:56.727848 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:37:56.727854 kernel: CPU features: detected: Spectre-v4
Feb 9 18:37:56.727860 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:37:56.727868 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:37:56.727874 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:37:56.727880 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:37:56.727886 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 18:37:56.727892 kernel: Policy zone: DMA
Feb 9 18:37:56.727899 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:37:56.727906 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:37:56.727912 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:37:56.727918 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:37:56.727924 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:37:56.727930 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 18:37:56.727938 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:37:56.727944 kernel: trace event string verifier disabled
Feb 9 18:37:56.727958 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:37:56.727966 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:37:56.727972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:37:56.727978 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:37:56.727985 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:37:56.727991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:37:56.727997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:37:56.728003 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:37:56.728009 kernel: GICv3: 256 SPIs implemented
Feb 9 18:37:56.728017 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:37:56.728023 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:37:56.728029 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:37:56.728035 kernel: GICv3: 16 PPIs implemented
Feb 9 18:37:56.728041 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 18:37:56.728047 kernel: ACPI: SRAT not present
Feb 9 18:37:56.728053 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 18:37:56.728059 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 18:37:56.728065 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 18:37:56.728071 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 18:37:56.728078 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 18:37:56.728084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:56.728091 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:37:56.728106 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:37:56.728112 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:37:56.728119 kernel: arm-pv: using stolen time PV
Feb 9 18:37:56.728125 kernel: Console: colour dummy device 80x25
Feb 9 18:37:56.728131 kernel: ACPI: Core revision 20210730
Feb 9 18:37:56.728138 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:37:56.728144 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:37:56.728151 kernel: LSM: Security Framework initializing
Feb 9 18:37:56.728157 kernel: SELinux: Initializing.
Feb 9 18:37:56.728165 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:37:56.728171 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:37:56.728181 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:37:56.728187 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 18:37:56.728193 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 18:37:56.728200 kernel: Remapping and enabling EFI services.
Feb 9 18:37:56.728206 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:37:56.728212 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:37:56.728218 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 18:37:56.728226 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 18:37:56.728232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:56.728238 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:37:56.728245 kernel: Detected PIPT I-cache on CPU2
Feb 9 18:37:56.728251 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 18:37:56.728258 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 18:37:56.728264 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:56.728270 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 18:37:56.728276 kernel: Detected PIPT I-cache on CPU3
Feb 9 18:37:56.728283 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 18:37:56.728290 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 18:37:56.728296 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:56.728302 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 18:37:56.728309 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:37:56.728319 kernel: SMP: Total of 4 processors activated.
Feb 9 18:37:56.728327 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:37:56.728333 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:37:56.728340 kernel: CPU features: detected: Common not Private translations
Feb 9 18:37:56.728346 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:37:56.728353 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:37:56.728359 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:37:56.728366 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:37:56.728374 kernel: CPU features: detected: RAS Extension Support
Feb 9 18:37:56.728380 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 18:37:56.728387 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:37:56.728393 kernel: alternatives: patching kernel code
Feb 9 18:37:56.728401 kernel: devtmpfs: initialized
Feb 9 18:37:56.728408 kernel: KASLR enabled
Feb 9 18:37:56.728415 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:37:56.728421 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:37:56.728428 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:37:56.728434 kernel: SMBIOS 3.0.0 present.
Feb 9 18:37:56.728441 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:37:56.728447 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:37:56.728454 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:37:56.728461 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:37:56.728468 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:37:56.728475 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:37:56.728482 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 18:37:56.728488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:37:56.728495 kernel: cpuidle: using governor menu
Feb 9 18:37:56.728501 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:37:56.728508 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:37:56.728514 kernel: ACPI: bus type PCI registered
Feb 9 18:37:56.728521 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:37:56.728529 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:37:56.728535 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:37:56.728542 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:37:56.728548 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:37:56.728555 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:37:56.728561 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:37:56.728568 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:37:56.728575 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:37:56.728581 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:37:56.728589 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:37:56.728596 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:37:56.728602 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:37:56.728609 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:37:56.728615 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:37:56.728622 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:37:56.728628 kernel: ACPI: Interpreter enabled
Feb 9 18:37:56.728635 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:37:56.728642 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:37:56.728649 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:37:56.728656 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:37:56.728662 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:37:56.728780 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:37:56.728842 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:37:56.728900 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:37:56.728980 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:37:56.729046 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:37:56.729055 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:37:56.729062 kernel: PCI host bridge to bus 0000:00
Feb 9 18:37:56.729135 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:37:56.729195 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:37:56.729248 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:37:56.729299 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:37:56.729369 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:37:56.729436 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:37:56.729498 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:37:56.729562 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:37:56.729621 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:37:56.729680 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:37:56.729739 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:37:56.729800 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:37:56.729852 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:37:56.729903 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:37:56.729997 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:37:56.730009 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:37:56.730016 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:37:56.730023 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:37:56.730031 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:37:56.730038 kernel: iommu: Default domain type: Translated
Feb 9 18:37:56.730044 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:37:56.730051 kernel: vgaarb: loaded
Feb 9 18:37:56.730058 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:37:56.730065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:37:56.730071 kernel: PTP clock support registered
Feb 9 18:37:56.730078 kernel: Registered efivars operations
Feb 9 18:37:56.730084 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:37:56.730091 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:37:56.730108 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:37:56.730115 kernel: pnp: PnP ACPI init
Feb 9 18:37:56.730193 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:37:56.730204 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:37:56.730210 kernel: NET: Registered PF_INET protocol family
Feb 9 18:37:56.730217 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:37:56.730224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:37:56.730231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:37:56.730239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:37:56.730246 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:37:56.730253 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:37:56.730260 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:37:56.730266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:37:56.730273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:37:56.730280 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:37:56.730286 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:37:56.730294 kernel: kvm [1]: HYP mode not available
Feb 9 18:37:56.730301 kernel: Initialise system trusted keyrings
Feb 9 18:37:56.730307 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:37:56.730314 kernel: Key type asymmetric registered
Feb 9 18:37:56.730320 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:37:56.730327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:37:56.730333 kernel: io scheduler mq-deadline registered
Feb 9 18:37:56.730340 kernel: io scheduler kyber registered
Feb 9 18:37:56.730346 kernel: io scheduler bfq registered
Feb 9 18:37:56.730353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:37:56.730361 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:37:56.730368 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:37:56.730831 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:37:56.730848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:37:56.730856 kernel: thunder_xcv, ver 1.0
Feb 9 18:37:56.730863 kernel: thunder_bgx, ver 1.0
Feb 9 18:37:56.730869 kernel: nicpf, ver 1.0
Feb 9 18:37:56.730876 kernel: nicvf, ver 1.0
Feb 9 18:37:56.730969 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:37:56.731043 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:37:56 UTC (1707503876)
Feb 9 18:37:56.731053 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:37:56.731059 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:37:56.731066 kernel: Segment Routing with IPv6
Feb 9 18:37:56.731073 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:37:56.731080 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:37:56.731086 kernel: Key type dns_resolver registered
Feb 9 18:37:56.731102 kernel: registered taskstats version 1
Feb 9 18:37:56.731112 kernel: Loading compiled-in X.509 certificates
Feb 9 18:37:56.731119 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:37:56.731153 kernel: Key type .fscrypt registered
Feb 9 18:37:56.731161 kernel: Key type fscrypt-provisioning registered
Feb 9 18:37:56.731168 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:37:56.731175 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:37:56.731181 kernel: ima: No architecture policies found
Feb 9 18:37:56.731188 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:37:56.731195 kernel: Run /init as init process
Feb 9 18:37:56.731204 kernel: with arguments:
Feb 9 18:37:56.731210 kernel: /init
Feb 9 18:37:56.731216 kernel: with environment:
Feb 9 18:37:56.731223 kernel: HOME=/
Feb 9 18:37:56.731229 kernel: TERM=linux
Feb 9 18:37:56.731236 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:37:56.731245 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:37:56.731659 systemd[1]: Detected virtualization kvm.
Feb 9 18:37:56.731670 systemd[1]: Detected architecture arm64.
Feb 9 18:37:56.731678 systemd[1]: Running in initrd.
Feb 9 18:37:56.731685 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:37:56.731692 systemd[1]: Hostname set to .
Feb 9 18:37:56.731699 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:37:56.731706 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:37:56.731714 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:37:56.731721 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:37:56.731729 systemd[1]: Reached target paths.target.
Feb 9 18:37:56.731737 systemd[1]: Reached target slices.target.
Feb 9 18:37:56.731744 systemd[1]: Reached target swap.target.
Feb 9 18:37:56.731751 systemd[1]: Reached target timers.target.
Feb 9 18:37:56.731758 systemd[1]: Listening on iscsid.socket.
Feb 9 18:37:56.731766 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:37:56.731773 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:37:56.731782 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:37:56.731789 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:37:56.731796 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:37:56.731803 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:37:56.731811 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:37:56.731818 systemd[1]: Reached target sockets.target.
Feb 9 18:37:56.731825 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:37:56.731832 systemd[1]: Finished network-cleanup.service.
Feb 9 18:37:56.731839 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:37:56.731848 systemd[1]: Starting systemd-journald.service...
Feb 9 18:37:56.731855 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:37:56.731862 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:37:56.731870 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:37:56.731877 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:37:56.731884 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:37:56.731891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:37:56.731898 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:37:56.731906 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:37:56.731914 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:37:56.731922 kernel: audit: type=1130 audit(1707503876.728:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.731932 systemd-journald[291]: Journal started
Feb 9 18:37:56.731994 systemd-journald[291]: Runtime Journal (/run/log/journal/2977d1e21c0a4d95a923e45457c79069) is 6.0M, max 48.7M, 42.6M free.
Feb 9 18:37:56.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.718361 systemd-modules-load[292]: Inserted module 'overlay'
Feb 9 18:37:56.734280 systemd[1]: Started systemd-journald.service.
Feb 9 18:37:56.734307 kernel: audit: type=1130 audit(1707503876.734:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.737963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:37:56.741076 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:37:56.754038 kernel: Bridge firewalling registered
Feb 9 18:37:56.754055 kernel: audit: type=1130 audit(1707503876.749:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.754069 kernel: SCSI subsystem initialized
Feb 9 18:37:56.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.742174 systemd-modules-load[292]: Inserted module 'br_netfilter'
Feb 9 18:37:56.750923 systemd-resolved[293]: Positive Trust Anchors:
Feb 9 18:37:56.750930 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:37:56.750985 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:37:56.768064 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:37:56.768082 kernel: audit: type=1130 audit(1707503876.758:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.768099 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:37:56.768108 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:37:56.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.768144 dracut-cmdline[309]: dracut-dracut-053
Feb 9 18:37:56.751413 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:37:56.756046 systemd-resolved[293]: Defaulting to hostname 'linux'.
Feb 9 18:37:56.756765 systemd[1]: Started systemd-resolved.service.
Feb 9 18:37:56.770939 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:37:56.759308 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:37:56.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.771086 systemd-modules-load[292]: Inserted module 'dm_multipath'
Feb 9 18:37:56.778415 kernel: audit: type=1130 audit(1707503876.774:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.771776 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:37:56.775608 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:37:56.782557 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:37:56.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.785972 kernel: audit: type=1130 audit(1707503876.783:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.830974 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:37:56.838980 kernel: iscsi: registered transport (tcp)
Feb 9 18:37:56.851980 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:37:56.851991 kernel: QLogic iSCSI HBA Driver
Feb 9 18:37:56.885270 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:37:56.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.886889 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:37:56.889323 kernel: audit: type=1130 audit(1707503876.885:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:56.932973 kernel: raid6: neonx8 gen() 13794 MB/s
Feb 9 18:37:56.949965 kernel: raid6: neonx8 xor() 10795 MB/s
Feb 9 18:37:56.966973 kernel: raid6: neonx4 gen() 13538 MB/s
Feb 9 18:37:56.983974 kernel: raid6: neonx4 xor() 11271 MB/s
Feb 9 18:37:57.000965 kernel: raid6: neonx2 gen() 12972 MB/s
Feb 9 18:37:57.017966 kernel: raid6: neonx2 xor() 10273 MB/s
Feb 9 18:37:57.034973 kernel: raid6: neonx1 gen() 10495 MB/s
Feb 9 18:37:57.051974 kernel: raid6: neonx1 xor() 8758 MB/s
Feb 9 18:37:57.068974 kernel: raid6: int64x8 gen() 6284 MB/s
Feb 9 18:37:57.085963 kernel: raid6: int64x8 xor() 3534 MB/s
Feb 9 18:37:57.102973 kernel: raid6: int64x4 gen() 7246 MB/s
Feb 9 18:37:57.119964 kernel: raid6: int64x4 xor() 3847 MB/s
Feb 9 18:37:57.136966 kernel: raid6: int64x2 gen() 6152 MB/s
Feb 9 18:37:57.153963 kernel: raid6: int64x2 xor() 3320 MB/s
Feb 9 18:37:57.170977 kernel: raid6: int64x1 gen() 5036 MB/s
Feb 9 18:37:57.188151 kernel: raid6: int64x1 xor() 2636 MB/s
Feb 9 18:37:57.188171 kernel: raid6: using algorithm neonx8 gen() 13794 MB/s
Feb 9 18:37:57.188188 kernel: raid6: .... xor() 10795 MB/s, rmw enabled
Feb 9 18:37:57.188204 kernel: raid6: using neon recovery algorithm
Feb 9 18:37:57.199183 kernel: xor: measuring software checksum speed
Feb 9 18:37:57.199210 kernel: 8regs : 17300 MB/sec
Feb 9 18:37:57.200002 kernel: 32regs : 20765 MB/sec
Feb 9 18:37:57.201158 kernel: arm64_neon : 28219 MB/sec
Feb 9 18:37:57.201170 kernel: xor: using function: arm64_neon (28219 MB/sec)
Feb 9 18:37:57.254979 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:37:57.264375 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:37:57.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:57.266044 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:37:57.269058 kernel: audit: type=1130 audit(1707503877.265:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:57.269079 kernel: audit: type=1334 audit(1707503877.265:10): prog-id=7 op=LOAD
Feb 9 18:37:57.265000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:37:57.265000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:37:57.280619 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 9 18:37:57.283936 systemd[1]: Started systemd-udevd.service.
Feb 9 18:37:57.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:57.285731 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:37:57.296609 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Feb 9 18:37:57.322744 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:37:57.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:57.324182 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:37:57.357332 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:37:57.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:57.373245 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:37:57.376195 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:37:57.376222 kernel: GPT:9289727 != 19775487 Feb 9 18:37:57.376231 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:37:57.376968 kernel: GPT:9289727 != 19775487 Feb 9 18:37:57.378007 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:37:57.378040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:37:57.403976 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543) Feb 9 18:37:57.405120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:37:57.410142 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:37:57.411184 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:37:57.417563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:37:57.421015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:37:57.422633 systemd[1]: Starting disk-uuid.service... Feb 9 18:37:57.428449 disk-uuid[568]: Primary Header is updated. Feb 9 18:37:57.428449 disk-uuid[568]: Secondary Entries is updated. Feb 9 18:37:57.428449 disk-uuid[568]: Secondary Header is updated. 
Feb 9 18:37:57.431969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:37:58.444986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:37:58.445042 disk-uuid[569]: The operation has completed successfully. Feb 9 18:37:58.466754 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:37:58.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.466847 systemd[1]: Finished disk-uuid.service. Feb 9 18:37:58.471142 systemd[1]: Starting verity-setup.service... Feb 9 18:37:58.486969 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:37:58.509631 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:37:58.511266 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:37:58.512067 systemd[1]: Finished verity-setup.service. Feb 9 18:37:58.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.559969 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:37:58.560249 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:37:58.561012 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:37:58.561671 systemd[1]: Starting ignition-setup.service... Feb 9 18:37:58.563823 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 18:37:58.569308 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:37:58.569345 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:37:58.569355 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:37:58.576664 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:37:58.583178 systemd[1]: Finished ignition-setup.service. Feb 9 18:37:58.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.584675 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:37:58.641621 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:37:58.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.642000 audit: BPF prog-id=9 op=LOAD Feb 9 18:37:58.643643 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:37:58.661994 ignition[649]: Ignition 2.14.0 Feb 9 18:37:58.662004 ignition[649]: Stage: fetch-offline Feb 9 18:37:58.662038 ignition[649]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:37:58.662047 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:37:58.662174 ignition[649]: parsed url from cmdline: "" Feb 9 18:37:58.662178 ignition[649]: no config URL provided Feb 9 18:37:58.662183 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:37:58.662190 ignition[649]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:37:58.662210 ignition[649]: op(1): [started] loading QEMU firmware config module Feb 9 18:37:58.668264 systemd-networkd[745]: lo: Link UP Feb 9 18:37:58.662215 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:37:58.668267 systemd-networkd[745]: lo: Gained carrier Feb 9 18:37:58.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.668625 systemd-networkd[745]: Enumeration completed Feb 9 18:37:58.668797 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:37:58.672508 ignition[649]: op(1): [finished] loading QEMU firmware config module Feb 9 18:37:58.670064 systemd-networkd[745]: eth0: Link UP Feb 9 18:37:58.670067 systemd-networkd[745]: eth0: Gained carrier Feb 9 18:37:58.670300 systemd[1]: Started systemd-networkd.service. Feb 9 18:37:58.670943 systemd[1]: Reached target network.target. Feb 9 18:37:58.672737 systemd[1]: Starting iscsiuio.service... Feb 9 18:37:58.681683 systemd[1]: Started iscsiuio.service. Feb 9 18:37:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:58.683134 systemd[1]: Starting iscsid.service... Feb 9 18:37:58.685018 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:37:58.686308 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:37:58.686308 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:37:58.686308 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:37:58.686308 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:37:58.686308 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:37:58.686308 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:37:58.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.689245 systemd[1]: Started iscsid.service. Feb 9 18:37:58.693479 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:37:58.703409 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:37:58.704380 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:37:58.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.705674 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 18:37:58.706935 systemd[1]: Reached target remote-fs.target. Feb 9 18:37:58.709039 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:37:58.716438 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:37:58.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.759185 ignition[649]: parsing config with SHA512: d24f6cd6b5d68c546f663b4f32693bfed6ec3a5d3d3879449d674f665de7fdc76c54a66f9902bb3477944a5473b379536455f6956303a32bbaa13ad113f38951 Feb 9 18:37:58.800567 unknown[649]: fetched base config from "system" Feb 9 18:37:58.800578 unknown[649]: fetched user config from "qemu" Feb 9 18:37:58.801213 ignition[649]: fetch-offline: fetch-offline passed Feb 9 18:37:58.802835 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:37:58.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.801277 ignition[649]: Ignition finished successfully Feb 9 18:37:58.804107 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:37:58.804778 systemd[1]: Starting ignition-kargs.service... Feb 9 18:37:58.813996 ignition[767]: Ignition 2.14.0 Feb 9 18:37:58.814007 ignition[767]: Stage: kargs Feb 9 18:37:58.814106 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:37:58.814117 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:37:58.816572 systemd[1]: Finished ignition-kargs.service. Feb 9 18:37:58.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:58.815203 ignition[767]: kargs: kargs passed Feb 9 18:37:58.815254 ignition[767]: Ignition finished successfully Feb 9 18:37:58.818692 systemd[1]: Starting ignition-disks.service... Feb 9 18:37:58.824475 ignition[773]: Ignition 2.14.0 Feb 9 18:37:58.824485 ignition[773]: Stage: disks Feb 9 18:37:58.824569 ignition[773]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:37:58.824578 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:37:58.825645 ignition[773]: disks: disks passed Feb 9 18:37:58.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.826453 systemd[1]: Finished ignition-disks.service. Feb 9 18:37:58.825689 ignition[773]: Ignition finished successfully Feb 9 18:37:58.827601 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:37:58.828658 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:37:58.829588 systemd[1]: Reached target local-fs.target. Feb 9 18:37:58.830766 systemd[1]: Reached target sysinit.target. Feb 9 18:37:58.831890 systemd[1]: Reached target basic.target. Feb 9 18:37:58.833713 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:37:58.844128 systemd-fsck[781]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 18:37:58.847562 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:37:58.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.849122 systemd[1]: Mounting sysroot.mount... Feb 9 18:37:58.853971 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:37:58.854040 systemd[1]: Mounted sysroot.mount. Feb 9 18:37:58.854727 systemd[1]: Reached target initrd-root-fs.target. 
Feb 9 18:37:58.857099 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:37:58.857921 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:37:58.857981 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:37:58.858007 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:37:58.859691 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:37:58.861480 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:37:58.865616 initrd-setup-root[791]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:37:58.870393 initrd-setup-root[799]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:37:58.874438 initrd-setup-root[807]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:37:58.878389 initrd-setup-root[815]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:37:58.902376 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:37:58.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.903828 systemd[1]: Starting ignition-mount.service... Feb 9 18:37:58.905188 systemd[1]: Starting sysroot-boot.service... Feb 9 18:37:58.909695 bash[832]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 9 18:37:58.918464 ignition[834]: INFO : Ignition 2.14.0 Feb 9 18:37:58.918464 ignition[834]: INFO : Stage: mount Feb 9 18:37:58.920667 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:37:58.920667 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:37:58.920667 ignition[834]: INFO : mount: mount passed Feb 9 18:37:58.920667 ignition[834]: INFO : Ignition finished successfully Feb 9 18:37:58.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.922837 systemd[1]: Finished ignition-mount.service. Feb 9 18:37:58.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:58.924025 systemd[1]: Finished sysroot-boot.service. Feb 9 18:37:59.519616 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:37:59.524976 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (842) Feb 9 18:37:59.526356 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:37:59.526381 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:37:59.526390 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:37:59.529458 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:37:59.530837 systemd[1]: Starting ignition-files.service... 
Feb 9 18:37:59.544391 ignition[862]: INFO : Ignition 2.14.0 Feb 9 18:37:59.544391 ignition[862]: INFO : Stage: files Feb 9 18:37:59.545865 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:37:59.545865 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:37:59.545865 ignition[862]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:37:59.548859 ignition[862]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:37:59.548859 ignition[862]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:37:59.551623 ignition[862]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:37:59.551623 ignition[862]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:37:59.554751 ignition[862]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:37:59.554751 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:37:59.554751 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:37:59.552856 unknown[862]: wrote ssh authorized keys file for user: core Feb 9 18:37:59.610423 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:37:59.665887 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:37:59.665887 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:37:59.668662 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:37:59.968649 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:38:00.096574 ignition[862]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:38:00.096574 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:38:00.099990 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:38:00.099990 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:38:00.202176 systemd-networkd[745]: eth0: Gained IPv6LL Feb 9 18:38:00.335444 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:38:00.618372 ignition[862]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:38:00.621187 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:38:00.622897 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:38:00.624583 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:38:00.624583 ignition[862]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:38:00.624583 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:38:00.726055 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:38:01.012325 ignition[862]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:38:01.012325 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:38:01.015527 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:38:01.015527 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:38:01.036037 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:38:01.691633 ignition[862]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:38:01.695440 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:38:01.695440 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:38:01.695440 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:38:01.720165 ignition[862]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): GET result: OK Feb 9 18:38:02.053240 ignition[862]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:38:02.053240 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:38:02.056494 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:38:02.056494 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 18:38:02.260605 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 18:38:02.329838 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO 
: files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:38:02.331277 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(12): [started] processing unit "coreos-metadata.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:38:02.331277 ignition[862]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 
18:38:02.353839 ignition[862]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 18:38:02.353839 ignition[862]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:38:02.379839 ignition[862]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:38:02.380971 ignition[862]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:38:02.380971 ignition[862]: INFO : files: op(1c): [started] setting preset to enabled for 
"prepare-cni-plugins.service" Feb 9 18:38:02.380971 ignition[862]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:38:02.380971 ignition[862]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:38:02.380971 ignition[862]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:38:02.380971 ignition[862]: INFO : files: files passed Feb 9 18:38:02.380971 ignition[862]: INFO : Ignition finished successfully Feb 9 18:38:02.393936 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 18:38:02.393968 kernel: audit: type=1130 audit(1707503882.382:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.393984 kernel: audit: type=1130 audit(1707503882.391:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.393994 kernel: audit: type=1131 audit(1707503882.392:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:02.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.381398 systemd[1]: Finished ignition-files.service. Feb 9 18:38:02.398472 kernel: audit: type=1130 audit(1707503882.395:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.384481 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:38:02.386615 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:38:02.401881 initrd-setup-root-after-ignition[887]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:38:02.387339 systemd[1]: Starting ignition-quench.service... Feb 9 18:38:02.403512 initrd-setup-root-after-ignition[889]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:38:02.390513 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:38:02.390590 systemd[1]: Finished ignition-quench.service. Feb 9 18:38:02.393241 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:38:02.396384 systemd[1]: Reached target ignition-complete.target. Feb 9 18:38:02.399851 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:38:02.411522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:38:02.411603 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 18:38:02.416766 kernel: audit: type=1130 audit(1707503882.412:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.416783 kernel: audit: type=1131 audit(1707503882.412:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.413210 systemd[1]: Reached target initrd-fs.target. Feb 9 18:38:02.417403 systemd[1]: Reached target initrd.target. Feb 9 18:38:02.418685 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:38:02.419338 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:38:02.429164 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:38:02.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.430549 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:38:02.433093 kernel: audit: type=1130 audit(1707503882.428:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.438126 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:38:02.438829 systemd[1]: Stopped target remote-cryptsetup.target. 
Feb 9 18:38:02.440066 systemd[1]: Stopped target timers.target. Feb 9 18:38:02.441249 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:38:02.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.441346 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:38:02.445531 kernel: audit: type=1131 audit(1707503882.442:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.442395 systemd[1]: Stopped target initrd.target. Feb 9 18:38:02.445163 systemd[1]: Stopped target basic.target. Feb 9 18:38:02.446227 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:38:02.447344 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:38:02.448365 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:38:02.449457 systemd[1]: Stopped target remote-fs.target. Feb 9 18:38:02.450669 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:38:02.451917 systemd[1]: Stopped target sysinit.target. Feb 9 18:38:02.453040 systemd[1]: Stopped target local-fs.target. Feb 9 18:38:02.454095 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:38:02.455170 systemd[1]: Stopped target swap.target. Feb 9 18:38:02.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.456174 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:38:02.460429 kernel: audit: type=1131 audit(1707503882.457:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:02.456274 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:38:02.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.457344 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:38:02.464289 kernel: audit: type=1131 audit(1707503882.460:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.459897 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:38:02.460008 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:38:02.461180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:38:02.461274 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:38:02.464015 systemd[1]: Stopped target paths.target. Feb 9 18:38:02.464926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:38:02.469005 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:38:02.469843 systemd[1]: Stopped target slices.target. Feb 9 18:38:02.470961 systemd[1]: Stopped target sockets.target. Feb 9 18:38:02.472128 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:38:02.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.472233 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 9 18:38:02.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.473388 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:38:02.473480 systemd[1]: Stopped ignition-files.service. Feb 9 18:38:02.476965 iscsid[752]: iscsid shutting down. Feb 9 18:38:02.475538 systemd[1]: Stopping ignition-mount.service... Feb 9 18:38:02.476631 systemd[1]: Stopping iscsid.service... Feb 9 18:38:02.478284 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:38:02.479033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:38:02.479190 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:38:02.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.482559 ignition[902]: INFO : Ignition 2.14.0 Feb 9 18:38:02.482559 ignition[902]: INFO : Stage: umount Feb 9 18:38:02.482559 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:38:02.482559 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:38:02.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.480286 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 9 18:38:02.488359 ignition[902]: INFO : umount: umount passed Feb 9 18:38:02.488359 ignition[902]: INFO : Ignition finished successfully Feb 9 18:38:02.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.480383 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:38:02.482925 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:38:02.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.483047 systemd[1]: Stopped iscsid.service. Feb 9 18:38:02.484675 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:38:02.484738 systemd[1]: Closed iscsid.socket. Feb 9 18:38:02.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.486092 systemd[1]: Stopping iscsiuio.service... Feb 9 18:38:02.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.487926 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 9 18:38:02.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.488013 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:38:02.489227 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:38:02.489295 systemd[1]: Stopped ignition-mount.service. Feb 9 18:38:02.490964 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:38:02.491064 systemd[1]: Stopped iscsiuio.service. Feb 9 18:38:02.492187 systemd[1]: Stopped target network.target. Feb 9 18:38:02.493312 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:38:02.493343 systemd[1]: Closed iscsiuio.socket. Feb 9 18:38:02.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.494314 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:38:02.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.494354 systemd[1]: Stopped ignition-disks.service. Feb 9 18:38:02.495629 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:38:02.495666 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:38:02.496678 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:38:02.496714 systemd[1]: Stopped ignition-setup.service. Feb 9 18:38:02.498059 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:38:02.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:02.499758 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:38:02.501681 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:38:02.502030 systemd-networkd[745]: eth0: DHCPv6 lease lost Feb 9 18:38:02.503000 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:38:02.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.513000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:38:02.503086 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:38:02.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.504505 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:38:02.504584 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:38:02.505614 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:38:02.505641 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:38:02.506590 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:38:02.506627 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:38:02.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.510615 systemd[1]: Stopping network-cleanup.service... Feb 9 18:38:02.511453 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:38:02.511503 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 9 18:38:02.512556 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:38:02.512590 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:38:02.525000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:38:02.515212 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:38:02.515251 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:38:02.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.516101 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:38:02.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.519890 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:38:02.520333 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:38:02.520428 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:38:02.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.526288 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:38:02.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.526408 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:38:02.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.527816 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 9 18:38:02.527892 systemd[1]: Stopped network-cleanup.service. Feb 9 18:38:02.528626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:38:02.528657 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:38:02.530746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:38:02.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.530780 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:38:02.532218 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:38:02.532260 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:38:02.533425 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:38:02.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:02.533464 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:38:02.534498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:38:02.534538 systemd[1]: Stopped dracut-cmdline-ask.service. 
Feb 9 18:38:02.536336 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:38:02.537393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:38:02.537448 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:38:02.539211 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:38:02.539249 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:38:02.539972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:38:02.540012 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:38:02.541844 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:38:02.542255 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:38:02.542329 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:38:02.543223 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:38:02.545303 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:38:02.551233 systemd[1]: Switching root. Feb 9 18:38:02.571444 systemd-journald[291]: Journal stopped Feb 9 18:38:04.710197 systemd-journald[291]: Received SIGTERM from PID 1 (n/a). Feb 9 18:38:04.710256 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:38:04.710269 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:38:04.710283 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:38:04.710293 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:38:04.710303 kernel: SELinux: policy capability open_perms=1 Feb 9 18:38:04.710312 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:38:04.710324 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:38:04.710337 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:38:04.710346 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:38:04.710355 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:38:04.710365 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:38:04.710375 systemd[1]: Successfully loaded SELinux policy in 39.328ms. Feb 9 18:38:04.710393 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.794ms. Feb 9 18:38:04.710405 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:38:04.710417 systemd[1]: Detected virtualization kvm. Feb 9 18:38:04.710428 systemd[1]: Detected architecture arm64. Feb 9 18:38:04.710438 systemd[1]: Detected first boot. Feb 9 18:38:04.710449 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:38:04.710459 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:38:04.710469 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:38:04.710479 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:38:04.710491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:38:04.710506 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:38:04.710518 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:38:04.710529 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:38:04.710539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:38:04.710549 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:38:04.710560 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:38:04.710571 systemd[1]: Created slice system-getty.slice. Feb 9 18:38:04.710582 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:38:04.710592 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:38:04.710603 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:38:04.710613 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:38:04.710624 systemd[1]: Created slice user.slice. Feb 9 18:38:04.710634 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:38:04.710645 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:38:04.710655 systemd[1]: Set up automount boot.automount. Feb 9 18:38:04.710666 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:38:04.710677 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:38:04.710688 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:38:04.710698 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:38:04.710708 systemd[1]: Reached target integritysetup.target. Feb 9 18:38:04.710719 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:38:04.710730 systemd[1]: Reached target remote-fs.target. 
Feb 9 18:38:04.710741 systemd[1]: Reached target slices.target. Feb 9 18:38:04.710752 systemd[1]: Reached target swap.target. Feb 9 18:38:04.710763 systemd[1]: Reached target torcx.target. Feb 9 18:38:04.710774 systemd[1]: Reached target veritysetup.target. Feb 9 18:38:04.710784 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:38:04.710795 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:38:04.710805 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:38:04.710816 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:38:04.710827 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:38:04.710838 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:38:04.710849 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:38:04.710860 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:38:04.710871 systemd[1]: Mounting media.mount... Feb 9 18:38:04.710881 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:38:04.710891 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:38:04.710902 systemd[1]: Mounting tmp.mount... Feb 9 18:38:04.710913 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:38:04.710923 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:38:04.710934 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:38:04.710944 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:38:04.710966 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:38:04.710977 systemd[1]: Starting modprobe@drm.service... Feb 9 18:38:04.710987 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:38:04.710997 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:38:04.711007 systemd[1]: Starting modprobe@loop.service... Feb 9 18:38:04.711018 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:38:04.711029 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Feb 9 18:38:04.711039 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:38:04.711049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:38:04.711061 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:38:04.711077 systemd[1]: Stopped systemd-journald.service. Feb 9 18:38:04.711088 kernel: fuse: init (API version 7.34) Feb 9 18:38:04.711097 kernel: loop: module loaded Feb 9 18:38:04.711109 systemd[1]: Starting systemd-journald.service... Feb 9 18:38:04.711120 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:38:04.711132 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:38:04.711142 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:38:04.711153 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:38:04.711164 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:38:04.711175 systemd[1]: Stopped verity-setup.service. Feb 9 18:38:04.711187 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:38:04.711199 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:38:04.711209 systemd[1]: Mounted media.mount. Feb 9 18:38:04.711220 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:38:04.711230 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:38:04.711240 systemd[1]: Mounted tmp.mount. Feb 9 18:38:04.711250 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:38:04.711260 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:38:04.711270 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:38:04.711281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:38:04.711292 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:38:04.711303 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:38:04.711313 systemd[1]: Finished modprobe@drm.service. Feb 9 18:38:04.711324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 9 18:38:04.711338 systemd-journald[998]: Journal started Feb 9 18:38:04.711379 systemd-journald[998]: Runtime Journal (/run/log/journal/2977d1e21c0a4d95a923e45457c79069) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:38:02.649000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:38:02.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:38:02.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:38:02.823000 audit: BPF prog-id=10 op=LOAD Feb 9 18:38:02.823000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:38:02.823000 audit: BPF prog-id=11 op=LOAD Feb 9 18:38:02.823000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:38:02.877000 audit[936]: AVC avc: denied { associate } for pid=936 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:38:02.877000 audit[936]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8d4 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:38:02.877000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:38:02.879000 audit[936]: AVC avc: denied { associate } for pid=936 
comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:38:02.879000 audit[936]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd9b9 a2=1ed a3=0 items=2 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:02.879000 audit: CWD cwd="/"
Feb 9 18:38:02.879000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:38:02.879000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:38:02.879000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:38:04.581000 audit: BPF prog-id=12 op=LOAD
Feb 9 18:38:04.581000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 18:38:04.581000 audit: BPF prog-id=13 op=LOAD
Feb 9 18:38:04.581000 audit: BPF prog-id=14 op=LOAD
Feb 9 18:38:04.581000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 18:38:04.581000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=15 op=LOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=16 op=LOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=17 op=LOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 18:38:04.582000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 18:38:04.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.712032 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 18:38:04.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.598000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 18:38:04.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.675000 audit: BPF prog-id=18 op=LOAD
Feb 9 18:38:04.675000 audit: BPF prog-id=19 op=LOAD
Feb 9 18:38:04.675000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:38:04.675000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 18:38:04.675000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 18:38:04.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.706000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 18:38:04.706000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcd67d500 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:04.706000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 18:38:04.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:02.876002 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:38:04.581037 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 18:38:02.876348 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:38:04.581048 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 18:38:02.876367 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:38:04.583912 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 18:38:04.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:02.876395 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 18:38:02.876404 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 18:38:02.876430 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 18:38:02.876442 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 18:38:02.876623 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 18:38:02.876655 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:38:02.876666 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:38:02.877783 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 18:38:02.877816 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 18:38:02.877835 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 18:38:02.877849 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 18:38:02.877865 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 18:38:02.877877 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 18:38:04.291898 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:04.292184 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:04.714129 systemd[1]: Started systemd-journald.service.
Feb 9 18:38:04.292298 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:04.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.292471 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:04.292519 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 18:38:04.292574 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2024-02-09T18:38:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 18:38:04.714803 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 18:38:04.716239 systemd[1]: Finished modprobe@fuse.service.
Feb 9 18:38:04.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.717286 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 18:38:04.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.718320 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 18:38:04.718475 systemd[1]: Finished modprobe@loop.service.
Feb 9 18:38:04.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.719497 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:38:04.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.720536 systemd[1]: Finished systemd-network-generator.service.
Feb 9 18:38:04.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.721656 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 18:38:04.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.722922 systemd[1]: Reached target network-pre.target.
Feb 9 18:38:04.724688 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 18:38:04.726665 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 18:38:04.727335 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 18:38:04.728773 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 18:38:04.730624 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 18:38:04.731513 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 18:38:04.732453 systemd[1]: Starting systemd-random-seed.service...
Feb 9 18:38:04.733335 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 18:38:04.734493 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:38:04.738212 systemd[1]: Starting systemd-sysusers.service...
Feb 9 18:38:04.739439 systemd-journald[998]: Time spent on flushing to /var/log/journal/2977d1e21c0a4d95a923e45457c79069 is 15.600ms for 1031 entries.
Feb 9 18:38:04.739439 systemd-journald[998]: System Journal (/var/log/journal/2977d1e21c0a4d95a923e45457c79069) is 8.0M, max 195.6M, 187.6M free.
Feb 9 18:38:04.775530 systemd-journald[998]: Received client request to flush runtime journal.
Feb 9 18:38:04.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.741810 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 18:38:04.776613 udevadm[1037]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 18:38:04.742733 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 18:38:04.747731 systemd[1]: Finished systemd-random-seed.service.
Feb 9 18:38:04.748974 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:38:04.750171 systemd[1]: Reached target first-boot-complete.target.
Feb 9 18:38:04.753721 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 18:38:04.755091 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:38:04.765408 systemd[1]: Finished systemd-sysusers.service.
Feb 9 18:38:04.767193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:38:04.776626 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 18:38:04.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:04.785430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:38:04.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.102125 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 18:38:05.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.102000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:38:05.102000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:38:05.102000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 18:38:05.102000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 18:38:05.104384 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:38:05.123059 systemd-udevd[1041]: Using default interface naming scheme 'v252'.
Feb 9 18:38:05.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.135905 systemd[1]: Started systemd-udevd.service.
Feb 9 18:38:05.136000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:38:05.141801 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:38:05.146000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:38:05.146000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:38:05.146000 audit: BPF prog-id=26 op=LOAD
Feb 9 18:38:05.148045 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:38:05.177634 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 18:38:05.183740 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:38:05.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.206060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:38:05.232158 systemd-networkd[1057]: lo: Link UP
Feb 9 18:38:05.232169 systemd-networkd[1057]: lo: Gained carrier
Feb 9 18:38:05.232500 systemd-networkd[1057]: Enumeration completed
Feb 9 18:38:05.232590 systemd[1]: Started systemd-networkd.service.
Feb 9 18:38:05.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.233402 systemd-networkd[1057]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:38:05.234564 systemd-networkd[1057]: eth0: Link UP
Feb 9 18:38:05.234575 systemd-networkd[1057]: eth0: Gained carrier
Feb 9 18:38:05.238396 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 18:38:05.240405 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 18:38:05.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.252223 lvm[1074]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:38:05.255114 systemd-networkd[1057]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:38:05.273570 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 18:38:05.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.274334 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:38:05.275876 systemd[1]: Starting lvm2-activation.service...
Feb 9 18:38:05.279011 lvm[1075]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:38:05.308743 systemd[1]: Finished lvm2-activation.service.
Feb 9 18:38:05.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.309471 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:38:05.310090 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 18:38:05.310117 systemd[1]: Reached target local-fs.target.
Feb 9 18:38:05.310653 systemd[1]: Reached target machines.target.
Feb 9 18:38:05.312241 systemd[1]: Starting ldconfig.service...
Feb 9 18:38:05.313085 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 18:38:05.313140 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:38:05.314196 systemd[1]: Starting systemd-boot-update.service...
Feb 9 18:38:05.315817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 18:38:05.317667 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 18:38:05.319255 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:38:05.319300 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:38:05.320311 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 18:38:05.323310 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1077 (bootctl)
Feb 9 18:38:05.324468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 18:38:05.325773 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 18:38:05.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.336749 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 18:38:05.343138 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 18:38:05.344876 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 18:38:05.417738 systemd-fsck[1087]: fsck.fat 4.2 (2021-01-31)
Feb 9 18:38:05.417738 systemd-fsck[1087]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 9 18:38:05.419229 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 18:38:05.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.421751 systemd[1]: Mounting boot.mount...
Feb 9 18:38:05.446323 systemd[1]: Mounted boot.mount.
Feb 9 18:38:05.477104 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 18:38:05.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.478338 systemd[1]: Finished systemd-boot-update.service.
Feb 9 18:38:05.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.523486 ldconfig[1076]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 18:38:05.528671 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 18:38:05.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.529735 systemd[1]: Finished ldconfig.service.
Feb 9 18:38:05.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.531604 systemd[1]: Starting audit-rules.service...
Feb 9 18:38:05.533149 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 18:38:05.534871 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 18:38:05.535000 audit: BPF prog-id=27 op=LOAD
Feb 9 18:38:05.537322 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:38:05.541000 audit: BPF prog-id=28 op=LOAD
Feb 9 18:38:05.542942 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 18:38:05.546147 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 18:38:05.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.549000 audit[1102]: SYSTEM_BOOT pid=1102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.548572 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 18:38:05.549698 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 18:38:05.554064 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 18:38:05.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.556426 systemd[1]: Starting systemd-update-done.service...
Feb 9 18:38:05.559360 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 18:38:05.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.562190 systemd[1]: Finished systemd-update-done.service.
Feb 9 18:38:05.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:05.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:38:05.577000 audit[1112]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8d212c0 a2=420 a3=0 items=0 ppid=1091 pid=1112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:05.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 18:38:05.579123 augenrules[1112]: No rules
Feb 9 18:38:05.579904 systemd[1]: Finished audit-rules.service.
Feb 9 18:38:05.585524 systemd[1]: Started systemd-timesyncd.service.
Feb 9 18:38:05.586573 systemd[1]: Reached target time-set.target.
Feb 9 18:38:05.587395 systemd-timesyncd[1096]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 18:38:05.587454 systemd-timesyncd[1096]: Initial clock synchronization to Fri 2024-02-09 18:38:05.350266 UTC.
Feb 9 18:38:05.591469 systemd-resolved[1095]: Positive Trust Anchors:
Feb 9 18:38:05.591482 systemd-resolved[1095]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:38:05.591510 systemd-resolved[1095]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:38:05.600727 systemd-resolved[1095]: Defaulting to hostname 'linux'.
Feb 9 18:38:05.601976 systemd[1]: Started systemd-resolved.service.
Feb 9 18:38:05.602734 systemd[1]: Reached target network.target.
Feb 9 18:38:05.603501 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:38:05.604258 systemd[1]: Reached target sysinit.target.
Feb 9 18:38:05.605014 systemd[1]: Started motdgen.path.
Feb 9 18:38:05.605682 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 18:38:05.606607 systemd[1]: Started logrotate.timer.
Feb 9 18:38:05.607361 systemd[1]: Started mdadm.timer.
Feb 9 18:38:05.608034 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 18:38:05.608778 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 18:38:05.608810 systemd[1]: Reached target paths.target.
Feb 9 18:38:05.609498 systemd[1]: Reached target timers.target.
Feb 9 18:38:05.610450 systemd[1]: Listening on dbus.socket.
Feb 9 18:38:05.611999 systemd[1]: Starting docker.socket...
Feb 9 18:38:05.614897 systemd[1]: Listening on sshd.socket.
Feb 9 18:38:05.615645 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:38:05.616056 systemd[1]: Listening on docker.socket.
Feb 9 18:38:05.616797 systemd[1]: Reached target sockets.target.
Feb 9 18:38:05.617517 systemd[1]: Reached target basic.target.
Feb 9 18:38:05.618266 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:38:05.618297 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:38:05.619208 systemd[1]: Starting containerd.service...
Feb 9 18:38:05.620771 systemd[1]: Starting dbus.service...
Feb 9 18:38:05.622668 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 18:38:05.624631 systemd[1]: Starting extend-filesystems.service...
Feb 9 18:38:05.625432 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 18:38:05.626675 systemd[1]: Starting motdgen.service...
Feb 9 18:38:05.632277 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 18:38:05.634062 systemd[1]: Starting prepare-critools.service...
Feb 9 18:38:05.636560 systemd[1]: Starting prepare-helm.service...
Feb 9 18:38:05.638399 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 18:38:05.640048 systemd[1]: Starting sshd-keygen.service...
Feb 9 18:38:05.643357 jq[1122]: false
Feb 9 18:38:05.642612 systemd[1]: Starting systemd-logind.service...
Feb 9 18:38:05.643294 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:38:05.643346 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 18:38:05.643722 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 18:38:05.644434 systemd[1]: Starting update-engine.service...
Feb 9 18:38:05.646121 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 18:38:05.650064 jq[1142]: true
Feb 9 18:38:05.649206 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 18:38:05.649346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 18:38:05.649665 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 18:38:05.649792 systemd[1]: Finished motdgen.service.
Feb 9 18:38:05.653090 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 18:38:05.653279 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 18:38:05.664601 jq[1148]: true Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda1 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda2 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda3 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found usr Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda4 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda6 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda7 Feb 9 18:38:05.665792 extend-filesystems[1123]: Found vda9 Feb 9 18:38:05.665792 extend-filesystems[1123]: Checking size of /dev/vda9 Feb 9 18:38:05.683204 tar[1147]: linux-arm64/helm Feb 9 18:38:05.683376 tar[1145]: ./ Feb 9 18:38:05.683376 tar[1145]: ./macvlan Feb 9 18:38:05.683540 tar[1146]: crictl Feb 9 18:38:05.691284 dbus-daemon[1121]: [system] SELinux support is enabled Feb 9 18:38:05.691536 systemd[1]: Started dbus.service. Feb 9 18:38:05.694823 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:38:05.695242 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:38:05.695267 systemd[1]: Reached target system-config.target. Feb 9 18:38:05.695904 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:38:05.695923 systemd[1]: Reached target user-config.target. Feb 9 18:38:05.709469 extend-filesystems[1123]: Resized partition /dev/vda9 Feb 9 18:38:05.719576 systemd-logind[1138]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:38:05.727455 extend-filesystems[1173]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:38:05.731797 systemd-logind[1138]: New seat seat0. Feb 9 18:38:05.734718 systemd[1]: Started systemd-logind.service. 
Feb 9 18:38:05.743972 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 18:38:05.748421 tar[1145]: ./static
Feb 9 18:38:05.768975 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 18:38:05.780256 extend-filesystems[1173]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 18:38:05.780256 extend-filesystems[1173]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 18:38:05.780256 extend-filesystems[1173]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 18:38:05.787099 extend-filesystems[1123]: Resized filesystem in /dev/vda9
Feb 9 18:38:05.788542 bash[1177]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 18:38:05.782081 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 18:38:05.782239 systemd[1]: Finished extend-filesystems.service.
Feb 9 18:38:05.787014 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 18:38:05.792294 update_engine[1140]: I0209 18:38:05.792021 1140 main.cc:92] Flatcar Update Engine starting
Feb 9 18:38:05.796102 systemd[1]: Started update-engine.service.
Feb 9 18:38:05.796409 update_engine[1140]: I0209 18:38:05.796180 1140 update_check_scheduler.cc:74] Next update check in 5m31s
Feb 9 18:38:05.797519 tar[1145]: ./vlan
Feb 9 18:38:05.798545 systemd[1]: Started locksmithd.service.
Feb 9 18:38:05.832331 env[1149]: time="2024-02-09T18:38:05.832276840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 18:38:05.847795 locksmithd[1181]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 18:38:05.851709 tar[1145]: ./portmap
Feb 9 18:38:05.881504 tar[1145]: ./host-local
Feb 9 18:38:05.885565 env[1149]: time="2024-02-09T18:38:05.885525760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 18:38:05.885683 env[1149]: time="2024-02-09T18:38:05.885659520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888507 env[1149]: time="2024-02-09T18:38:05.888466040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888507 env[1149]: time="2024-02-09T18:38:05.888499240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888733 env[1149]: time="2024-02-09T18:38:05.888707080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888733 env[1149]: time="2024-02-09T18:38:05.888729760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888789 env[1149]: time="2024-02-09T18:38:05.888743800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 18:38:05.888789 env[1149]: time="2024-02-09T18:38:05.888754320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.888841 env[1149]: time="2024-02-09T18:38:05.888824400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.889076 env[1149]: time="2024-02-09T18:38:05.889045280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:38:05.889192 env[1149]: time="2024-02-09T18:38:05.889169560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:38:05.889192 env[1149]: time="2024-02-09T18:38:05.889188760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 18:38:05.889262 env[1149]: time="2024-02-09T18:38:05.889242680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 18:38:05.889262 env[1149]: time="2024-02-09T18:38:05.889260680Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 18:38:05.894236 env[1149]: time="2024-02-09T18:38:05.894203760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 18:38:05.894298 env[1149]: time="2024-02-09T18:38:05.894240280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 18:38:05.894298 env[1149]: time="2024-02-09T18:38:05.894254280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 18:38:05.894298 env[1149]: time="2024-02-09T18:38:05.894286080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894369 env[1149]: time="2024-02-09T18:38:05.894299840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894369 env[1149]: time="2024-02-09T18:38:05.894314360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894369 env[1149]: time="2024-02-09T18:38:05.894331280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894701 env[1149]: time="2024-02-09T18:38:05.894674280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894701 env[1149]: time="2024-02-09T18:38:05.894698040Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894772 env[1149]: time="2024-02-09T18:38:05.894712640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894772 env[1149]: time="2024-02-09T18:38:05.894731320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.894772 env[1149]: time="2024-02-09T18:38:05.894743120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 18:38:05.894869 env[1149]: time="2024-02-09T18:38:05.894848560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 18:38:05.894942 env[1149]: time="2024-02-09T18:38:05.894924680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 18:38:05.895691 env[1149]: time="2024-02-09T18:38:05.895660160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 18:38:05.895736 env[1149]: time="2024-02-09T18:38:05.895709800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.895768 env[1149]: time="2024-02-09T18:38:05.895732000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 18:38:05.895869 env[1149]: time="2024-02-09T18:38:05.895844280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.895869 env[1149]: time="2024-02-09T18:38:05.895865280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.895927 env[1149]: time="2024-02-09T18:38:05.895881680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.895927 env[1149]: time="2024-02-09T18:38:05.895897000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.895927 env[1149]: time="2024-02-09T18:38:05.895912680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896006 env[1149]: time="2024-02-09T18:38:05.895929760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896006 env[1149]: time="2024-02-09T18:38:05.895944400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896006 env[1149]: time="2024-02-09T18:38:05.895976000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896006 env[1149]: time="2024-02-09T18:38:05.895996280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 18:38:05.896165 env[1149]: time="2024-02-09T18:38:05.896139160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896197 env[1149]: time="2024-02-09T18:38:05.896166000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896197 env[1149]: time="2024-02-09T18:38:05.896183120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896234 env[1149]: time="2024-02-09T18:38:05.896199200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 18:38:05.896234 env[1149]: time="2024-02-09T18:38:05.896219640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 18:38:05.896278 env[1149]: time="2024-02-09T18:38:05.896233000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 18:38:05.896278 env[1149]: time="2024-02-09T18:38:05.896258680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 18:38:05.896324 env[1149]: time="2024-02-09T18:38:05.896295400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 18:38:05.896672 env[1149]: time="2024-02-09T18:38:05.896610360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 18:38:05.899053 env[1149]: time="2024-02-09T18:38:05.896679880Z" level=info msg="Connect containerd service"
Feb 9 18:38:05.899053 env[1149]: time="2024-02-09T18:38:05.896723440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 18:38:05.899053 env[1149]: time="2024-02-09T18:38:05.897458480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:38:05.899053 env[1149]: time="2024-02-09T18:38:05.897795880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 18:38:05.899876 env[1149]: time="2024-02-09T18:38:05.899839560Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 18:38:05.899978 env[1149]: time="2024-02-09T18:38:05.897946400Z" level=info msg="Start subscribing containerd event"
Feb 9 18:38:05.900356 env[1149]: time="2024-02-09T18:38:05.900332800Z" level=info msg="containerd successfully booted in 0.070765s"
Feb 9 18:38:05.900396 systemd[1]: Started containerd.service.
Feb 9 18:38:05.910971 env[1149]: time="2024-02-09T18:38:05.908318160Z" level=info msg="Start recovering state"
Feb 9 18:38:05.910971 env[1149]: time="2024-02-09T18:38:05.908400480Z" level=info msg="Start event monitor"
Feb 9 18:38:05.910971 env[1149]: time="2024-02-09T18:38:05.908420640Z" level=info msg="Start snapshots syncer"
Feb 9 18:38:05.910971 env[1149]: time="2024-02-09T18:38:05.908431000Z" level=info msg="Start cni network conf syncer for default"
Feb 9 18:38:05.910971 env[1149]: time="2024-02-09T18:38:05.908438560Z" level=info msg="Start streaming server"
Feb 9 18:38:05.911648 tar[1145]: ./vrf
Feb 9 18:38:05.940861 tar[1145]: ./bridge
Feb 9 18:38:05.974548 tar[1145]: ./tuning
Feb 9 18:38:06.002152 tar[1145]: ./firewall
Feb 9 18:38:06.035876 tar[1145]: ./host-device
Feb 9 18:38:06.066022 tar[1145]: ./sbr
Feb 9 18:38:06.093500 tar[1145]: ./loopback
Feb 9 18:38:06.120135 tar[1145]: ./dhcp
Feb 9 18:38:06.160585 tar[1147]: linux-arm64/LICENSE
Feb 9 18:38:06.160736 tar[1147]: linux-arm64/README.md
Feb 9 18:38:06.164852 systemd[1]: Finished prepare-helm.service.
Feb 9 18:38:06.170023 systemd[1]: Finished prepare-critools.service.
Feb 9 18:38:06.194741 tar[1145]: ./ptp
Feb 9 18:38:06.225007 tar[1145]: ./ipvlan
Feb 9 18:38:06.251403 tar[1145]: ./bandwidth
Feb 9 18:38:06.284230 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 18:38:06.730228 systemd-networkd[1057]: eth0: Gained IPv6LL
Feb 9 18:38:07.257734 sshd_keygen[1144]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 18:38:07.275038 systemd[1]: Finished sshd-keygen.service.
Feb 9 18:38:07.277234 systemd[1]: Starting issuegen.service...
Feb 9 18:38:07.281680 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 18:38:07.281828 systemd[1]: Finished issuegen.service.
Feb 9 18:38:07.283891 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 18:38:07.293428 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 18:38:07.295575 systemd[1]: Started getty@tty1.service.
Feb 9 18:38:07.297522 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 9 18:38:07.298491 systemd[1]: Reached target getty.target.
Feb 9 18:38:07.299126 systemd[1]: Reached target multi-user.target.
Feb 9 18:38:07.300878 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 18:38:07.306705 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 18:38:07.306843 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 18:38:07.307774 systemd[1]: Startup finished in 570ms (kernel) + 6.021s (initrd) + 4.704s (userspace) = 11.296s.
Feb 9 18:38:09.434171 systemd[1]: Created slice system-sshd.slice.
Feb 9 18:38:09.435659 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:55664.service.
Feb 9 18:38:09.490181 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 55664 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:38:09.491830 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.500267 systemd-logind[1138]: New session 1 of user core.
Feb 9 18:38:09.501335 systemd[1]: Created slice user-500.slice.
Feb 9 18:38:09.502564 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 18:38:09.509683 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 18:38:09.510987 systemd[1]: Starting user@500.service...
Feb 9 18:38:09.513354 (systemd)[1212]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.566626 systemd[1212]: Queued start job for default target default.target.
Feb 9 18:38:09.567015 systemd[1212]: Reached target paths.target.
Feb 9 18:38:09.567033 systemd[1212]: Reached target sockets.target.
Feb 9 18:38:09.567044 systemd[1212]: Reached target timers.target.
Feb 9 18:38:09.567054 systemd[1212]: Reached target basic.target.
Feb 9 18:38:09.567100 systemd[1212]: Reached target default.target.
Feb 9 18:38:09.567121 systemd[1212]: Startup finished in 49ms.
Feb 9 18:38:09.567329 systemd[1]: Started user@500.service.
Feb 9 18:38:09.568566 systemd[1]: Started session-1.scope.
Feb 9 18:38:09.617399 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:55680.service.
Feb 9 18:38:09.657394 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 55680 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:38:09.658652 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.661916 systemd-logind[1138]: New session 2 of user core.
Feb 9 18:38:09.662720 systemd[1]: Started session-2.scope.
Feb 9 18:38:09.714569 sshd[1221]: pam_unix(sshd:session): session closed for user core
Feb 9 18:38:09.718476 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:55680.service: Deactivated successfully.
Feb 9 18:38:09.719119 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 18:38:09.719593 systemd-logind[1138]: Session 2 logged out. Waiting for processes to exit.
Feb 9 18:38:09.720823 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:55686.service.
Feb 9 18:38:09.721461 systemd-logind[1138]: Removed session 2.
Feb 9 18:38:09.760229 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 55686 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:38:09.761528 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.764353 systemd-logind[1138]: New session 3 of user core.
Feb 9 18:38:09.765124 systemd[1]: Started session-3.scope.
Feb 9 18:38:09.814974 sshd[1227]: pam_unix(sshd:session): session closed for user core
Feb 9 18:38:09.818071 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:55686.service: Deactivated successfully.
Feb 9 18:38:09.818797 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 18:38:09.819387 systemd-logind[1138]: Session 3 logged out. Waiting for processes to exit.
Feb 9 18:38:09.820917 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:55694.service.
Feb 9 18:38:09.821683 systemd-logind[1138]: Removed session 3.
Feb 9 18:38:09.860730 sshd[1233]: Accepted publickey for core from 10.0.0.1 port 55694 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:38:09.861822 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.864922 systemd-logind[1138]: New session 4 of user core.
Feb 9 18:38:09.866317 systemd[1]: Started session-4.scope.
Feb 9 18:38:09.919281 sshd[1233]: pam_unix(sshd:session): session closed for user core
Feb 9 18:38:09.922090 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:55694.service: Deactivated successfully.
Feb 9 18:38:09.922648 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 18:38:09.923168 systemd-logind[1138]: Session 4 logged out. Waiting for processes to exit.
Feb 9 18:38:09.924233 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:55708.service.
Feb 9 18:38:09.924862 systemd-logind[1138]: Removed session 4.
Feb 9 18:38:09.964138 sshd[1239]: Accepted publickey for core from 10.0.0.1 port 55708 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:38:09.965419 sshd[1239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:09.969282 systemd-logind[1138]: New session 5 of user core.
Feb 9 18:38:09.970217 systemd[1]: Started session-5.scope.
Feb 9 18:38:10.033003 sudo[1242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 18:38:10.033200 sudo[1242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 18:38:10.591534 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 18:38:10.596762 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 18:38:10.597171 systemd[1]: Reached target network-online.target.
Feb 9 18:38:10.598389 systemd[1]: Starting docker.service...
Feb 9 18:38:10.680829 env[1260]: time="2024-02-09T18:38:10.680775056Z" level=info msg="Starting up"
Feb 9 18:38:10.682292 env[1260]: time="2024-02-09T18:38:10.682204860Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 18:38:10.682292 env[1260]: time="2024-02-09T18:38:10.682291057Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 18:38:10.682362 env[1260]: time="2024-02-09T18:38:10.682311260Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 18:38:10.682362 env[1260]: time="2024-02-09T18:38:10.682321951Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 18:38:10.684176 env[1260]: time="2024-02-09T18:38:10.684152511Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 18:38:10.684304 env[1260]: time="2024-02-09T18:38:10.684288664Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 18:38:10.684373 env[1260]: time="2024-02-09T18:38:10.684356780Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 18:38:10.684425 env[1260]: time="2024-02-09T18:38:10.684413183Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 18:38:10.688227 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport30985462-merged.mount: Deactivated successfully.
Feb 9 18:38:10.912033 env[1260]: time="2024-02-09T18:38:10.911921412Z" level=info msg="Loading containers: start."
Feb 9 18:38:11.007975 kernel: Initializing XFRM netlink socket
Feb 9 18:38:11.029771 env[1260]: time="2024-02-09T18:38:11.029721928Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 18:38:11.085160 systemd-networkd[1057]: docker0: Link UP
Feb 9 18:38:11.093330 env[1260]: time="2024-02-09T18:38:11.093291466Z" level=info msg="Loading containers: done."
Feb 9 18:38:11.118818 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2011630346-merged.mount: Deactivated successfully.
Feb 9 18:38:11.119729 env[1260]: time="2024-02-09T18:38:11.119684212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 18:38:11.119868 env[1260]: time="2024-02-09T18:38:11.119842135Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 18:38:11.119986 env[1260]: time="2024-02-09T18:38:11.119969529Z" level=info msg="Daemon has completed initialization"
Feb 9 18:38:11.133994 systemd[1]: Started docker.service.
Feb 9 18:38:11.140383 env[1260]: time="2024-02-09T18:38:11.140339350Z" level=info msg="API listen on /run/docker.sock"
Feb 9 18:38:11.155565 systemd[1]: Reloading.
Feb 9 18:38:11.200593 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2024-02-09T18:38:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:38:11.200624 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2024-02-09T18:38:11Z" level=info msg="torcx already run"
Feb 9 18:38:11.259075 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:38:11.259095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:38:11.278589 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:38:11.342907 systemd[1]: Started kubelet.service.
Feb 9 18:38:11.505590 kubelet[1440]: E0209 18:38:11.505466 1440 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 18:38:11.508000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 18:38:11.508126 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 18:38:11.691493 env[1149]: time="2024-02-09T18:38:11.691445671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 9 18:38:12.450298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299496733.mount: Deactivated successfully.
Feb 9 18:38:14.015969 env[1149]: time="2024-02-09T18:38:14.015918853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:14.017622 env[1149]: time="2024-02-09T18:38:14.017581312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:14.019397 env[1149]: time="2024-02-09T18:38:14.019371378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:14.021849 env[1149]: time="2024-02-09T18:38:14.021813575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:14.022747 env[1149]: time="2024-02-09T18:38:14.022720248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\""
Feb 9 18:38:14.032032 env[1149]: time="2024-02-09T18:38:14.032006918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 18:38:16.100934 env[1149]: time="2024-02-09T18:38:16.100879005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:16.103827 env[1149]: time="2024-02-09T18:38:16.103786481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:16.105280 env[1149]: time="2024-02-09T18:38:16.105252523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:16.109476 env[1149]: time="2024-02-09T18:38:16.109446173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:16.110383 env[1149]: time="2024-02-09T18:38:16.110342809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\""
Feb 9 18:38:16.121370 env[1149]: time="2024-02-09T18:38:16.121342624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 18:38:17.188087 env[1149]: time="2024-02-09T18:38:17.188015745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:17.189650 env[1149]: time="2024-02-09T18:38:17.189614246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:17.191634 env[1149]: time="2024-02-09T18:38:17.191600643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:17.193976 env[1149]: time="2024-02-09T18:38:17.193941884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:17.194696 env[1149]: time="2024-02-09T18:38:17.194662892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\""
Feb 9 18:38:17.203623 env[1149]: time="2024-02-09T18:38:17.203587914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 18:38:18.228109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2812013629.mount: Deactivated successfully.
Feb 9 18:38:18.634587 env[1149]: time="2024-02-09T18:38:18.634468645Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:18.636092 env[1149]: time="2024-02-09T18:38:18.636060566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:18.637409 env[1149]: time="2024-02-09T18:38:18.637380759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:18.639014 env[1149]: time="2024-02-09T18:38:18.638985246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:18.639638 env[1149]: time="2024-02-09T18:38:18.639600273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\""
Feb 9 18:38:18.648260 env[1149]: time="2024-02-09T18:38:18.648230613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 18:38:19.175565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397551318.mount: Deactivated successfully.
Feb 9 18:38:19.180526 env[1149]: time="2024-02-09T18:38:19.180492557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:19.182421 env[1149]: time="2024-02-09T18:38:19.182391468Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:19.183721 env[1149]: time="2024-02-09T18:38:19.183688140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:19.185506 env[1149]: time="2024-02-09T18:38:19.185462823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:38:19.185727 env[1149]: time="2024-02-09T18:38:19.185694964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 9 18:38:19.195777 env[1149]: time="2024-02-09T18:38:19.195735015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 18:38:20.060940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173942395.mount: Deactivated successfully.
Feb 9 18:38:21.758837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 18:38:21.759042 systemd[1]: Stopped kubelet.service. Feb 9 18:38:21.760473 systemd[1]: Started kubelet.service. Feb 9 18:38:21.789005 env[1149]: time="2024-02-09T18:38:21.788941013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:21.790389 env[1149]: time="2024-02-09T18:38:21.790355933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:21.791819 env[1149]: time="2024-02-09T18:38:21.791787386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:21.793540 env[1149]: time="2024-02-09T18:38:21.793501544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:21.794150 env[1149]: time="2024-02-09T18:38:21.794119782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:38:21.803921 env[1149]: time="2024-02-09T18:38:21.803878366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:38:21.805231 kubelet[1497]: E0209 18:38:21.805190 1497 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:38:21.808329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:38:21.808459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
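The kubelet exit above ("the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set") is a flag-validation failure, not a crash: since the dockershim removal in kubelet 1.24 the CRI endpoint must be supplied explicitly. A minimal, hedged sketch of supplying it via a systemd drop-in — the flag and the containerd socket path are standard, but the drop-in file name and the `KUBELET_EXTRA_ARGS` wiring are illustrative assumptions, and the sketch writes to a scratch directory rather than `/etc` so it is safe to run:

```shell
# Illustrative drop-in supplying the CRI endpoint the kubelet complained about.
# Written to a temp dir here; on a real host it would live under
# /etc/systemd/system/kubelet.service.d/ followed by a daemon-reload.
DROPIN_DIR="$(mktemp -d)/kubelet.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/20-containerd.conf" <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
# On a real host: systemctl daemon-reload && systemctl restart kubelet
cat "$DROPIN_DIR/20-containerd.conf"
```

Note the unit above is already under systemd restart supervision ("Scheduled restart job, restart counter is at 1"), so once the endpoint is configured the next scheduled restart would pick it up.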
Feb 9 18:38:22.338494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677344753.mount: Deactivated successfully. Feb 9 18:38:22.997048 env[1149]: time="2024-02-09T18:38:22.996940822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:22.998673 env[1149]: time="2024-02-09T18:38:22.998632842Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:22.999909 env[1149]: time="2024-02-09T18:38:22.999879545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:23.002023 env[1149]: time="2024-02-09T18:38:23.001996010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:23.002553 env[1149]: time="2024-02-09T18:38:23.002529333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:38:28.500041 systemd[1]: Stopped kubelet.service. Feb 9 18:38:28.516389 systemd[1]: Reloading. 
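Each image pull above ends with a "returns image reference" line that pairs the requested tag with the digest containerd resolved it to. A small sketch of extracting both fields from one of these journal lines (the helper is hypothetical; the sample line and digest are copied from the coredns pull above, including the backslash-escaped quotes as they appear in the journal):

```shell
# Sample "PullImage ... returns image reference" line, verbatim journal format.
line='time="2024-02-09T18:38:23.002529333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\""'
# Pull out the tag (between the escaped quotes after PullImage) ...
image=$(printf '%s\n' "$line" | sed -n 's/.*PullImage \\"\([^"\\]*\)\\".*/\1/p')
# ... and the resolved digest (after "image reference").
digest=$(printf '%s\n' "$line" | sed -n 's/.*image reference \\"\(sha256:[0-9a-f]*\)\\".*/\1/p')
echo "image=$image"
echo "digest=$digest"
```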
Feb 9 18:38:28.574703 /usr/lib/systemd/system-generators/torcx-generator[1600]: time="2024-02-09T18:38:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:38:28.575168 /usr/lib/systemd/system-generators/torcx-generator[1600]: time="2024-02-09T18:38:28Z" level=info msg="torcx already run" Feb 9 18:38:28.630412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:38:28.630431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:38:28.650190 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:38:28.725132 systemd[1]: Started kubelet.service. Feb 9 18:38:28.777138 kubelet[1639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:38:28.777138 kubelet[1639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:38:28.780005 kubelet[1639]: I0209 18:38:28.777717 1639 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:38:28.780005 kubelet[1639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 18:38:28.780005 kubelet[1639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:38:29.260333 kubelet[1639]: I0209 18:38:29.260288 1639 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:38:29.260333 kubelet[1639]: I0209 18:38:29.260331 1639 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:38:29.261423 kubelet[1639]: I0209 18:38:29.261385 1639 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:38:29.266692 kubelet[1639]: I0209 18:38:29.266651 1639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:38:29.267221 kubelet[1639]: E0209 18:38:29.267200 1639 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.268703 kubelet[1639]: W0209 18:38:29.268676 1639 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.269467 1639 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.269838 1639 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.269900 1639 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.269995 1639 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.270007 1639 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:38:29.270535 kubelet[1639]: I0209 18:38:29.270153 1639 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 18:38:29.275376 kubelet[1639]: I0209 18:38:29.275344 1639 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:38:29.275376 kubelet[1639]: I0209 18:38:29.275372 1639 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:38:29.275531 kubelet[1639]: I0209 18:38:29.275520 1639 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:38:29.275579 kubelet[1639]: I0209 18:38:29.275534 1639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:38:29.276497 kubelet[1639]: W0209 18:38:29.276428 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.276497 kubelet[1639]: E0209 18:38:29.276498 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.276602 kubelet[1639]: W0209 18:38:29.276452 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.276602 kubelet[1639]: E0209 18:38:29.276517 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.276660 kubelet[1639]: I0209 18:38:29.276631 1639 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 
9 18:38:29.277535 kubelet[1639]: W0209 18:38:29.277505 1639 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:38:29.278243 kubelet[1639]: I0209 18:38:29.278218 1639 server.go:1186] "Started kubelet" Feb 9 18:38:29.279027 kubelet[1639]: I0209 18:38:29.279000 1639 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:38:29.280091 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 18:38:29.280162 kubelet[1639]: I0209 18:38:29.279848 1639 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:38:29.280917 kubelet[1639]: I0209 18:38:29.280893 1639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:38:29.281361 kubelet[1639]: E0209 18:38:29.281276 1639 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b245c23d1e1db7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 38, 29, 278195127, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 38, 29, 278195127, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.103:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.103:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:38:29.282383 kubelet[1639]: I0209 18:38:29.282351 1639 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:38:29.282458 kubelet[1639]: I0209 18:38:29.282445 1639 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:38:29.283210 kubelet[1639]: W0209 18:38:29.283146 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.283210 kubelet[1639]: E0209 18:38:29.283208 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.284108 kubelet[1639]: E0209 18:38:29.284039 1639 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:38:29.284108 kubelet[1639]: E0209 18:38:29.284072 1639 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:38:29.285343 kubelet[1639]: E0209 18:38:29.285307 1639 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.304000 kubelet[1639]: I0209 18:38:29.303974 1639 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:38:29.304209 kubelet[1639]: I0209 18:38:29.304156 1639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:38:29.304290 kubelet[1639]: I0209 18:38:29.304280 1639 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:38:29.311110 kubelet[1639]: I0209 18:38:29.311089 1639 policy_none.go:49] "None policy: Start" Feb 9 18:38:29.311872 kubelet[1639]: I0209 18:38:29.311856 1639 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:38:29.311998 kubelet[1639]: I0209 18:38:29.311985 1639 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:38:29.318981 systemd[1]: Created slice kubepods.slice. Feb 9 18:38:29.326007 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:38:29.328580 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:38:29.335857 kubelet[1639]: I0209 18:38:29.335716 1639 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:38:29.335977 kubelet[1639]: I0209 18:38:29.335926 1639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:38:29.337186 kubelet[1639]: E0209 18:38:29.336765 1639 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 18:38:29.347526 kubelet[1639]: I0209 18:38:29.347503 1639 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 18:38:29.374500 kubelet[1639]: I0209 18:38:29.374470 1639 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:38:29.374500 kubelet[1639]: I0209 18:38:29.374494 1639 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:38:29.374615 kubelet[1639]: I0209 18:38:29.374511 1639 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:38:29.374615 kubelet[1639]: E0209 18:38:29.374560 1639 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:38:29.375109 kubelet[1639]: W0209 18:38:29.375082 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.375232 kubelet[1639]: E0209 18:38:29.375220 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.385593 kubelet[1639]: I0209 18:38:29.384896 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:38:29.385593 kubelet[1639]: E0209 18:38:29.385284 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Feb 9 18:38:29.475006 kubelet[1639]: I0209 18:38:29.474938 1639 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:29.476818 kubelet[1639]: I0209 18:38:29.476220 1639 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:29.477752 kubelet[1639]: I0209 18:38:29.477733 1639 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:29.478874 
kubelet[1639]: I0209 18:38:29.478854 1639 status_manager.go:698] "Failed to get status for pod" podUID=65c018ff6af42cf001f4d6077e1692d1 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.103:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.103:6443: connect: connection refused" Feb 9 18:38:29.479285 kubelet[1639]: I0209 18:38:29.479263 1639 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.103:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.103:6443: connect: connection refused" Feb 9 18:38:29.480517 kubelet[1639]: I0209 18:38:29.480498 1639 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.103:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.103:6443: connect: connection refused" Feb 9 18:38:29.485069 kubelet[1639]: I0209 18:38:29.485031 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:29.485883 kubelet[1639]: E0209 18:38:29.485680 1639 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.486095 systemd[1]: Created slice kubepods-burstable-pod65c018ff6af42cf001f4d6077e1692d1.slice. Feb 9 18:38:29.507387 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. 
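The lease-controller errors in this stretch back off from 200ms to 400ms (and later to 800ms) while the apiserver at 10.0.0.103:6443 keeps refusing connections — consistent with a simple doubling retry delay. A hedged illustration of that pattern (not the kubelet's actual implementation):

```shell
# Doubling backoff matching the retry delays observed in this log:
# 200ms -> 400ms -> 800ms.
delays=""
d=200
for attempt in 1 2 3; do
  delays="$delays $d"
  d=$((d * 2))
done
echo "retry delays (ms):$delays"
```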
Feb 9 18:38:29.525322 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. Feb 9 18:38:29.587414 kubelet[1639]: I0209 18:38:29.587360 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:29.587414 kubelet[1639]: I0209 18:38:29.587404 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:29.587551 kubelet[1639]: I0209 18:38:29.587427 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:29.587551 kubelet[1639]: I0209 18:38:29.587452 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:29.587551 kubelet[1639]: I0209 18:38:29.587471 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:29.587551 kubelet[1639]: I0209 18:38:29.587501 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:38:29.587551 kubelet[1639]: I0209 18:38:29.587520 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:29.587688 kubelet[1639]: I0209 18:38:29.587539 1639 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:29.588173 kubelet[1639]: I0209 18:38:29.587976 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:38:29.588346 kubelet[1639]: E0209 18:38:29.588318 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Feb 9 18:38:29.806565 kubelet[1639]: E0209 18:38:29.805347 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:29.806846 env[1149]: 
time="2024-02-09T18:38:29.806015590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65c018ff6af42cf001f4d6077e1692d1,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:29.810221 kubelet[1639]: E0209 18:38:29.810192 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:29.810989 env[1149]: time="2024-02-09T18:38:29.810842349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:29.829998 kubelet[1639]: E0209 18:38:29.829936 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:29.830418 env[1149]: time="2024-02-09T18:38:29.830377833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:29.886393 kubelet[1639]: E0209 18:38:29.886337 1639 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:29.990276 kubelet[1639]: I0209 18:38:29.990243 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:38:29.990545 kubelet[1639]: E0209 18:38:29.990530 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Feb 9 18:38:30.184033 kubelet[1639]: E0209 18:38:30.183861 1639 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost.17b245c23d1e1db7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 38, 29, 278195127, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 38, 29, 278195127, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.103:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.103:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:38:30.271821 kubelet[1639]: W0209 18:38:30.271755 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.271821 kubelet[1639]: E0209 18:38:30.271798 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.303268 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3837907177.mount: Deactivated successfully. Feb 9 18:38:30.307329 env[1149]: time="2024-02-09T18:38:30.307282982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.309201 env[1149]: time="2024-02-09T18:38:30.309163159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.310111 env[1149]: time="2024-02-09T18:38:30.310081534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.311629 env[1149]: time="2024-02-09T18:38:30.311606418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.314004 env[1149]: time="2024-02-09T18:38:30.313969015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.315486 env[1149]: time="2024-02-09T18:38:30.315456384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.316598 env[1149]: time="2024-02-09T18:38:30.316572801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.317824 env[1149]: time="2024-02-09T18:38:30.317795529Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.320309 env[1149]: time="2024-02-09T18:38:30.320273746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.321841 env[1149]: time="2024-02-09T18:38:30.321788803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.322669 env[1149]: time="2024-02-09T18:38:30.322635863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.324296 env[1149]: time="2024-02-09T18:38:30.324261267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:30.352802 env[1149]: time="2024-02-09T18:38:30.352660085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:30.352802 env[1149]: time="2024-02-09T18:38:30.352710465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:30.352802 env[1149]: time="2024-02-09T18:38:30.352722850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:30.353070 env[1149]: time="2024-02-09T18:38:30.353017176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:30.353070 env[1149]: time="2024-02-09T18:38:30.353044183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:30.353156 env[1149]: time="2024-02-09T18:38:30.353065677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:30.353399 env[1149]: time="2024-02-09T18:38:30.353339348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94a1b80ccf9a18eca2ebd6d3d6b5dd9a4a7ed3584020428d92a2f60b59374299 pid=1722 runtime=io.containerd.runc.v2 Feb 9 18:38:30.353539 env[1149]: time="2024-02-09T18:38:30.353495959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b20d1197a399eb29da5ea5940bffb9836ec51b72bcc1641d026bdf77bccf312 pid=1723 runtime=io.containerd.runc.v2 Feb 9 18:38:30.356234 env[1149]: time="2024-02-09T18:38:30.356146968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:30.356234 env[1149]: time="2024-02-09T18:38:30.356182166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:30.356234 env[1149]: time="2024-02-09T18:38:30.356192274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:30.356416 env[1149]: time="2024-02-09T18:38:30.356370220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0831a17a3aac7f9da1b37897f3544ef644f857d959b5d2663aa22f029856b720 pid=1747 runtime=io.containerd.runc.v2 Feb 9 18:38:30.365412 systemd[1]: Started cri-containerd-9b20d1197a399eb29da5ea5940bffb9836ec51b72bcc1641d026bdf77bccf312.scope. Feb 9 18:38:30.372927 systemd[1]: Started cri-containerd-94a1b80ccf9a18eca2ebd6d3d6b5dd9a4a7ed3584020428d92a2f60b59374299.scope. Feb 9 18:38:30.378829 systemd[1]: Started cri-containerd-0831a17a3aac7f9da1b37897f3544ef644f857d959b5d2663aa22f029856b720.scope. Feb 9 18:38:30.389834 kubelet[1639]: W0209 18:38:30.389619 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.389834 kubelet[1639]: E0209 18:38:30.389692 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.436615 env[1149]: time="2024-02-09T18:38:30.435751635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b20d1197a399eb29da5ea5940bffb9836ec51b72bcc1641d026bdf77bccf312\"" Feb 9 18:38:30.436736 kubelet[1639]: E0209 18:38:30.436703 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:30.439653 env[1149]: time="2024-02-09T18:38:30.439556455Z" level=info 
msg="CreateContainer within sandbox \"9b20d1197a399eb29da5ea5940bffb9836ec51b72bcc1641d026bdf77bccf312\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:38:30.441015 env[1149]: time="2024-02-09T18:38:30.440982938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0831a17a3aac7f9da1b37897f3544ef644f857d959b5d2663aa22f029856b720\"" Feb 9 18:38:30.441090 env[1149]: time="2024-02-09T18:38:30.441068595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65c018ff6af42cf001f4d6077e1692d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"94a1b80ccf9a18eca2ebd6d3d6b5dd9a4a7ed3584020428d92a2f60b59374299\"" Feb 9 18:38:30.441534 kubelet[1639]: E0209 18:38:30.441506 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:30.441576 kubelet[1639]: E0209 18:38:30.441536 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:30.443544 env[1149]: time="2024-02-09T18:38:30.443508458Z" level=info msg="CreateContainer within sandbox \"0831a17a3aac7f9da1b37897f3544ef644f857d959b5d2663aa22f029856b720\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:38:30.444031 env[1149]: time="2024-02-09T18:38:30.443995672Z" level=info msg="CreateContainer within sandbox \"94a1b80ccf9a18eca2ebd6d3d6b5dd9a4a7ed3584020428d92a2f60b59374299\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:38:30.456195 env[1149]: time="2024-02-09T18:38:30.456123515Z" level=info msg="CreateContainer within sandbox \"9b20d1197a399eb29da5ea5940bffb9836ec51b72bcc1641d026bdf77bccf312\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5741930329f48a84013d58cf09ba220594966742a747768ea2b51418042a79e2\"" Feb 9 18:38:30.457100 env[1149]: time="2024-02-09T18:38:30.457060866Z" level=info msg="StartContainer for \"5741930329f48a84013d58cf09ba220594966742a747768ea2b51418042a79e2\"" Feb 9 18:38:30.460419 env[1149]: time="2024-02-09T18:38:30.460380151Z" level=info msg="CreateContainer within sandbox \"0831a17a3aac7f9da1b37897f3544ef644f857d959b5d2663aa22f029856b720\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6e8a62dc336dcb8f40cd06dfe56f620f7019900c0b24fc8599f8e5c07c5b3fb\"" Feb 9 18:38:30.460770 env[1149]: time="2024-02-09T18:38:30.460744033Z" level=info msg="StartContainer for \"b6e8a62dc336dcb8f40cd06dfe56f620f7019900c0b24fc8599f8e5c07c5b3fb\"" Feb 9 18:38:30.462517 env[1149]: time="2024-02-09T18:38:30.462473472Z" level=info msg="CreateContainer within sandbox \"94a1b80ccf9a18eca2ebd6d3d6b5dd9a4a7ed3584020428d92a2f60b59374299\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d255c10b0fdaea74f5fa809080237faf246f558f4c8143b45e62a9f543ea1da5\"" Feb 9 18:38:30.462798 env[1149]: time="2024-02-09T18:38:30.462764641Z" level=info msg="StartContainer for \"d255c10b0fdaea74f5fa809080237faf246f558f4c8143b45e62a9f543ea1da5\"" Feb 9 18:38:30.473765 systemd[1]: Started cri-containerd-5741930329f48a84013d58cf09ba220594966742a747768ea2b51418042a79e2.scope. Feb 9 18:38:30.478761 systemd[1]: Started cri-containerd-b6e8a62dc336dcb8f40cd06dfe56f620f7019900c0b24fc8599f8e5c07c5b3fb.scope. Feb 9 18:38:30.489000 systemd[1]: Started cri-containerd-d255c10b0fdaea74f5fa809080237faf246f558f4c8143b45e62a9f543ea1da5.scope. 
Feb 9 18:38:30.532983 env[1149]: time="2024-02-09T18:38:30.531984727Z" level=info msg="StartContainer for \"b6e8a62dc336dcb8f40cd06dfe56f620f7019900c0b24fc8599f8e5c07c5b3fb\" returns successfully" Feb 9 18:38:30.556455 env[1149]: time="2024-02-09T18:38:30.556409049Z" level=info msg="StartContainer for \"5741930329f48a84013d58cf09ba220594966742a747768ea2b51418042a79e2\" returns successfully" Feb 9 18:38:30.559847 env[1149]: time="2024-02-09T18:38:30.559804842Z" level=info msg="StartContainer for \"d255c10b0fdaea74f5fa809080237faf246f558f4c8143b45e62a9f543ea1da5\" returns successfully" Feb 9 18:38:30.647070 kubelet[1639]: W0209 18:38:30.647016 1639 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.647070 kubelet[1639]: E0209 18:38:30.647072 1639 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.687001 kubelet[1639]: E0209 18:38:30.686868 1639 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.103:6443: connect: connection refused Feb 9 18:38:30.792125 kubelet[1639]: I0209 18:38:30.791852 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:38:31.384656 kubelet[1639]: E0209 18:38:31.384610 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:31.387742 kubelet[1639]: E0209 18:38:31.387714 1639 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:31.394504 kubelet[1639]: E0209 18:38:31.394475 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:32.397257 kubelet[1639]: E0209 18:38:32.397219 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:32.397770 kubelet[1639]: E0209 18:38:32.397742 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:32.398136 kubelet[1639]: E0209 18:38:32.398116 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:33.176277 kubelet[1639]: E0209 18:38:33.176237 1639 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 18:38:33.245757 kubelet[1639]: I0209 18:38:33.245715 1639 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:38:33.278279 kubelet[1639]: I0209 18:38:33.278231 1639 apiserver.go:52] "Watching apiserver" Feb 9 18:38:33.282579 kubelet[1639]: I0209 18:38:33.282533 1639 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:38:33.308534 kubelet[1639]: I0209 18:38:33.308489 1639 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:38:33.880884 kubelet[1639]: E0209 18:38:33.880852 1639 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 9 18:38:33.881509 kubelet[1639]: E0209 18:38:33.881491 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:34.078713 kubelet[1639]: E0209 18:38:34.078687 1639 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:34.079342 kubelet[1639]: E0209 18:38:34.079326 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:35.810712 systemd[1]: Reloading. Feb 9 18:38:35.860923 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2024-02-09T18:38:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:38:35.860962 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2024-02-09T18:38:35Z" level=info msg="torcx already run" Feb 9 18:38:35.901866 kubelet[1639]: E0209 18:38:35.901822 1639 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:35.924513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:38:35.924531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 18:38:35.945241 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:38:36.042509 systemd[1]: Stopping kubelet.service... Feb 9 18:38:36.061395 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:38:36.061601 systemd[1]: Stopped kubelet.service. Feb 9 18:38:36.063470 systemd[1]: Started kubelet.service. Feb 9 18:38:36.122526 kubelet[2008]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:38:36.122526 kubelet[2008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:38:36.122848 kubelet[2008]: I0209 18:38:36.122563 2008 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:38:36.123766 kubelet[2008]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:38:36.123766 kubelet[2008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:38:36.126921 kubelet[2008]: I0209 18:38:36.126894 2008 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:38:36.126921 kubelet[2008]: I0209 18:38:36.126920 2008 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:38:36.127121 kubelet[2008]: I0209 18:38:36.127107 2008 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:38:36.128413 kubelet[2008]: I0209 18:38:36.128389 2008 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:38:36.129004 kubelet[2008]: I0209 18:38:36.128975 2008 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:38:36.130527 kubelet[2008]: W0209 18:38:36.130516 2008 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:38:36.131667 kubelet[2008]: I0209 18:38:36.131646 2008 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:38:36.131866 kubelet[2008]: I0209 18:38:36.131853 2008 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:38:36.131947 kubelet[2008]: I0209 18:38:36.131932 2008 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available 
Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:38:36.132027 kubelet[2008]: I0209 18:38:36.131967 2008 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:38:36.132027 kubelet[2008]: I0209 18:38:36.131980 2008 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:38:36.132027 kubelet[2008]: I0209 18:38:36.132015 2008 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:38:36.135549 kubelet[2008]: I0209 18:38:36.135530 2008 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:38:36.135681 kubelet[2008]: I0209 18:38:36.135671 2008 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:38:36.135774 kubelet[2008]: I0209 18:38:36.135765 2008 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:38:36.135908 kubelet[2008]: I0209 18:38:36.135899 2008 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:38:36.141466 kubelet[2008]: I0209 18:38:36.141041 2008 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:38:36.141538 kubelet[2008]: I0209 18:38:36.141513 2008 server.go:1186] "Started kubelet" Feb 9 18:38:36.141841 kubelet[2008]: I0209 18:38:36.141796 2008 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:38:36.142395 kubelet[2008]: I0209 18:38:36.142361 
2008 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:38:36.142903 kubelet[2008]: I0209 18:38:36.142884 2008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:38:36.144704 kubelet[2008]: E0209 18:38:36.144666 2008 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:38:36.144789 kubelet[2008]: I0209 18:38:36.144713 2008 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:38:36.144821 kubelet[2008]: I0209 18:38:36.144794 2008 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:38:36.157505 kubelet[2008]: E0209 18:38:36.156092 2008 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:38:36.157505 kubelet[2008]: E0209 18:38:36.156132 2008 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:38:36.182923 kubelet[2008]: I0209 18:38:36.182896 2008 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:38:36.199222 sudo[2052]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:38:36.199438 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:38:36.212458 kubelet[2008]: I0209 18:38:36.212434 2008 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:38:36.212458 kubelet[2008]: I0209 18:38:36.212455 2008 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:38:36.212611 kubelet[2008]: I0209 18:38:36.212473 2008 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:38:36.212611 kubelet[2008]: E0209 18:38:36.212535 2008 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:38:36.215937 kubelet[2008]: I0209 18:38:36.215913 2008 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:38:36.215937 kubelet[2008]: I0209 18:38:36.215931 2008 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:38:36.216191 kubelet[2008]: I0209 18:38:36.215948 2008 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:38:36.216191 kubelet[2008]: I0209 18:38:36.216145 2008 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:38:36.216191 kubelet[2008]: I0209 18:38:36.216164 2008 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:38:36.216191 kubelet[2008]: I0209 18:38:36.216180 2008 policy_none.go:49] "None policy: Start" Feb 9 18:38:36.216812 kubelet[2008]: I0209 18:38:36.216794 2008 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:38:36.216856 kubelet[2008]: I0209 18:38:36.216833 2008 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:38:36.217007 kubelet[2008]: I0209 18:38:36.216994 2008 state_mem.go:75] "Updated machine memory state" Feb 9 18:38:36.222336 kubelet[2008]: I0209 18:38:36.222318 2008 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:38:36.223547 kubelet[2008]: I0209 18:38:36.223533 2008 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:38:36.247983 kubelet[2008]: I0209 18:38:36.247947 2008 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Feb 9 18:38:36.254618 kubelet[2008]: I0209 18:38:36.254314 2008 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:38:36.254618 kubelet[2008]: I0209 18:38:36.254387 2008 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:38:36.313389 kubelet[2008]: I0209 18:38:36.313247 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:36.313389 kubelet[2008]: I0209 18:38:36.313370 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:36.313572 kubelet[2008]: I0209 18:38:36.313411 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:36.345598 kubelet[2008]: I0209 18:38:36.345550 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:36.345598 kubelet[2008]: I0209 18:38:36.345589 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:36.345760 kubelet[2008]: I0209 18:38:36.345612 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.345760 kubelet[2008]: I0209 18:38:36.345633 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.345760 kubelet[2008]: I0209 18:38:36.345652 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.345760 kubelet[2008]: I0209 18:38:36.345684 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65c018ff6af42cf001f4d6077e1692d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65c018ff6af42cf001f4d6077e1692d1\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:38:36.345760 kubelet[2008]: I0209 18:38:36.345703 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.345877 kubelet[2008]: I0209 18:38:36.345725 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.345877 kubelet[2008]: I0209 18:38:36.345744 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:38:36.544101 kubelet[2008]: E0209 18:38:36.541447 2008 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:38:36.544101 kubelet[2008]: E0209 18:38:36.542271 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:36.621229 kubelet[2008]: E0209 18:38:36.621132 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:36.635834 sudo[2052]: pam_unix(sudo:session): session closed for user root Feb 9 18:38:36.641095 kubelet[2008]: E0209 18:38:36.641072 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:37.136427 kubelet[2008]: I0209 18:38:37.136372 2008 apiserver.go:52] "Watching apiserver" Feb 9 18:38:37.144896 kubelet[2008]: I0209 18:38:37.144870 2008 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:38:37.152252 kubelet[2008]: I0209 18:38:37.152232 2008 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:38:37.221248 kubelet[2008]: E0209 18:38:37.221224 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:37.221564 kubelet[2008]: E0209 18:38:37.221539 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:37.222060 kubelet[2008]: E0209 18:38:37.222046 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:37.547665 kubelet[2008]: I0209 18:38:37.547631 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.547584505 pod.CreationTimestamp="2024-02-09 18:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:38:37.547196608 +0000 UTC m=+1.478222343" watchObservedRunningTime="2024-02-09 18:38:37.547584505 +0000 UTC m=+1.478610240" Feb 9 18:38:37.940693 kubelet[2008]: I0209 18:38:37.940597 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.940531657 pod.CreationTimestamp="2024-02-09 18:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:38:37.940315198 +0000 UTC m=+1.871340933" watchObservedRunningTime="2024-02-09 18:38:37.940531657 +0000 UTC m=+1.871557392" Feb 9 18:38:38.190965 sudo[1242]: pam_unix(sudo:session): session closed for user root Feb 9 18:38:38.192400 sshd[1239]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:38.194556 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:38:38.194759 systemd[1]: session-5.scope: Consumed 7.614s CPU time. Feb 9 18:38:38.195212 systemd-logind[1138]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:38:38.195304 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:55708.service: Deactivated successfully. Feb 9 18:38:38.196170 systemd-logind[1138]: Removed session 5. 
Feb 9 18:38:38.222478 kubelet[2008]: E0209 18:38:38.222448 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:38.341877 kubelet[2008]: I0209 18:38:38.341707 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.341665853 pod.CreationTimestamp="2024-02-09 18:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:38:38.341473324 +0000 UTC m=+2.272499059" watchObservedRunningTime="2024-02-09 18:38:38.341665853 +0000 UTC m=+2.272691588" Feb 9 18:38:39.448515 kubelet[2008]: E0209 18:38:39.448481 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:40.524751 kubelet[2008]: E0209 18:38:40.524716 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:41.226341 kubelet[2008]: E0209 18:38:41.226311 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:43.485239 kubelet[2008]: E0209 18:38:43.485200 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:44.231148 kubelet[2008]: E0209 18:38:44.231118 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:49.457058 kubelet[2008]: E0209 
18:38:49.457032 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:49.823087 kubelet[2008]: I0209 18:38:49.823056 2008 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:38:49.823403 env[1149]: time="2024-02-09T18:38:49.823358205Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:38:49.823790 kubelet[2008]: I0209 18:38:49.823767 2008 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:38:50.238546 kubelet[2008]: E0209 18:38:50.238514 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:50.415334 kubelet[2008]: I0209 18:38:50.415297 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:50.419849 systemd[1]: Created slice kubepods-besteffort-podd8983864_a7a7_4329_84a6_c1bbfeb83abc.slice. Feb 9 18:38:50.421109 kubelet[2008]: W0209 18:38:50.421064 2008 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8983864_a7a7_4329_84a6_c1bbfeb83abc.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8983864_a7a7_4329_84a6_c1bbfeb83abc.slice/cpuset.cpus.effective: no such device Feb 9 18:38:50.426703 kubelet[2008]: I0209 18:38:50.426646 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:50.431626 systemd[1]: Created slice kubepods-burstable-pod20dcf362_0d80_4715_8780_8efbab2e5ccf.slice. 
Feb 9 18:38:50.449605 kubelet[2008]: I0209 18:38:50.449566 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-run\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449605 kubelet[2008]: I0209 18:38:50.449610 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-hubble-tls\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449778 kubelet[2008]: I0209 18:38:50.449640 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxn6\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-kube-api-access-4nxn6\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449778 kubelet[2008]: I0209 18:38:50.449681 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8983864-a7a7-4329-84a6-c1bbfeb83abc-lib-modules\") pod \"kube-proxy-77xzp\" (UID: \"d8983864-a7a7-4329-84a6-c1bbfeb83abc\") " pod="kube-system/kube-proxy-77xzp" Feb 9 18:38:50.449778 kubelet[2008]: I0209 18:38:50.449714 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-bpf-maps\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449778 kubelet[2008]: I0209 18:38:50.449742 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-lib-modules\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449778 kubelet[2008]: I0209 18:38:50.449771 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-cgroup\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449906 kubelet[2008]: I0209 18:38:50.449804 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-kernel\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449906 kubelet[2008]: I0209 18:38:50.449846 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-hostproc\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449906 kubelet[2008]: I0209 18:38:50.449878 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20dcf362-0d80-4715-8780-8efbab2e5ccf-clustermesh-secrets\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.449906 kubelet[2008]: I0209 18:38:50.449902 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-net\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.450048 kubelet[2008]: I0209 18:38:50.449922 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8983864-a7a7-4329-84a6-c1bbfeb83abc-kube-proxy\") pod \"kube-proxy-77xzp\" (UID: \"d8983864-a7a7-4329-84a6-c1bbfeb83abc\") " pod="kube-system/kube-proxy-77xzp" Feb 9 18:38:50.450048 kubelet[2008]: I0209 18:38:50.449945 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8983864-a7a7-4329-84a6-c1bbfeb83abc-xtables-lock\") pod \"kube-proxy-77xzp\" (UID: \"d8983864-a7a7-4329-84a6-c1bbfeb83abc\") " pod="kube-system/kube-proxy-77xzp" Feb 9 18:38:50.450048 kubelet[2008]: I0209 18:38:50.449982 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-etc-cni-netd\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.450048 kubelet[2008]: I0209 18:38:50.450002 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-xtables-lock\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.450048 kubelet[2008]: I0209 18:38:50.450034 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cni-path\") pod \"cilium-mc69t\" (UID: 
\"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.450160 kubelet[2008]: I0209 18:38:50.450053 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-config-path\") pod \"cilium-mc69t\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") " pod="kube-system/cilium-mc69t" Feb 9 18:38:50.450160 kubelet[2008]: I0209 18:38:50.450075 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24db\" (UniqueName: \"kubernetes.io/projected/d8983864-a7a7-4329-84a6-c1bbfeb83abc-kube-api-access-t24db\") pod \"kube-proxy-77xzp\" (UID: \"d8983864-a7a7-4329-84a6-c1bbfeb83abc\") " pod="kube-system/kube-proxy-77xzp" Feb 9 18:38:50.725730 kubelet[2008]: E0209 18:38:50.725699 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:50.726315 env[1149]: time="2024-02-09T18:38:50.726279581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77xzp,Uid:d8983864-a7a7-4329-84a6-c1bbfeb83abc,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:50.737114 kubelet[2008]: E0209 18:38:50.737086 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:50.737493 env[1149]: time="2024-02-09T18:38:50.737445070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc69t,Uid:20dcf362-0d80-4715-8780-8efbab2e5ccf,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:50.739770 env[1149]: time="2024-02-09T18:38:50.739697848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:50.739862 env[1149]: time="2024-02-09T18:38:50.739750370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:50.739862 env[1149]: time="2024-02-09T18:38:50.739761370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:50.740097 env[1149]: time="2024-02-09T18:38:50.740059938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b277093a3c4815d5a894aa802d8178a607a90ab44e44dd102406da10d92dec7f pid=2124 runtime=io.containerd.runc.v2 Feb 9 18:38:50.754229 env[1149]: time="2024-02-09T18:38:50.752143611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:50.754229 env[1149]: time="2024-02-09T18:38:50.752181692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:50.754229 env[1149]: time="2024-02-09T18:38:50.752191812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:50.754229 env[1149]: time="2024-02-09T18:38:50.752636264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7 pid=2148 runtime=io.containerd.runc.v2 Feb 9 18:38:50.753789 systemd[1]: Started cri-containerd-b277093a3c4815d5a894aa802d8178a607a90ab44e44dd102406da10d92dec7f.scope. Feb 9 18:38:50.763808 systemd[1]: Started cri-containerd-5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7.scope. 
Feb 9 18:38:50.791396 env[1149]: time="2024-02-09T18:38:50.791359108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77xzp,Uid:d8983864-a7a7-4329-84a6-c1bbfeb83abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b277093a3c4815d5a894aa802d8178a607a90ab44e44dd102406da10d92dec7f\"" Feb 9 18:38:50.792017 kubelet[2008]: E0209 18:38:50.791995 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:50.798314 env[1149]: time="2024-02-09T18:38:50.798232246Z" level=info msg="CreateContainer within sandbox \"b277093a3c4815d5a894aa802d8178a607a90ab44e44dd102406da10d92dec7f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:38:50.804407 env[1149]: time="2024-02-09T18:38:50.804375886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc69t,Uid:20dcf362-0d80-4715-8780-8efbab2e5ccf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\"" Feb 9 18:38:50.804885 kubelet[2008]: E0209 18:38:50.804865 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:50.807373 env[1149]: time="2024-02-09T18:38:50.806914552Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:38:50.810843 env[1149]: time="2024-02-09T18:38:50.810807212Z" level=info msg="CreateContainer within sandbox \"b277093a3c4815d5a894aa802d8178a607a90ab44e44dd102406da10d92dec7f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15bf2f89b7af7508e4e40e90406cd8f403be0ffeb6fa45a06b69af59cb439b99\"" Feb 9 18:38:50.812480 env[1149]: time="2024-02-09T18:38:50.812444335Z" level=info msg="StartContainer for 
\"15bf2f89b7af7508e4e40e90406cd8f403be0ffeb6fa45a06b69af59cb439b99\"" Feb 9 18:38:50.826566 systemd[1]: Started cri-containerd-15bf2f89b7af7508e4e40e90406cd8f403be0ffeb6fa45a06b69af59cb439b99.scope. Feb 9 18:38:50.872237 env[1149]: time="2024-02-09T18:38:50.872188364Z" level=info msg="StartContainer for \"15bf2f89b7af7508e4e40e90406cd8f403be0ffeb6fa45a06b69af59cb439b99\" returns successfully" Feb 9 18:38:50.878460 kubelet[2008]: I0209 18:38:50.878421 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:50.883281 systemd[1]: Created slice kubepods-besteffort-pod9cd40c7d_e78a_42f1_8ec6_4bc2f52e6957.slice. Feb 9 18:38:50.953919 kubelet[2008]: I0209 18:38:50.953885 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-6ftqk\" (UID: \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\") " pod="kube-system/cilium-operator-f59cbd8c6-6ftqk" Feb 9 18:38:50.954065 kubelet[2008]: I0209 18:38:50.953939 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8s5\" (UniqueName: \"kubernetes.io/projected/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-kube-api-access-qf8s5\") pod \"cilium-operator-f59cbd8c6-6ftqk\" (UID: \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\") " pod="kube-system/cilium-operator-f59cbd8c6-6ftqk" Feb 9 18:38:51.011493 update_engine[1140]: I0209 18:38:51.011371 1140 update_attempter.cc:509] Updating boot flags... 
Feb 9 18:38:51.240829 kubelet[2008]: E0209 18:38:51.240274 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:51.485638 kubelet[2008]: E0209 18:38:51.485613 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:51.486409 env[1149]: time="2024-02-09T18:38:51.486370272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6ftqk,Uid:9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:51.498973 env[1149]: time="2024-02-09T18:38:51.498662135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:51.498973 env[1149]: time="2024-02-09T18:38:51.498700816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:51.498973 env[1149]: time="2024-02-09T18:38:51.498711136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:51.498973 env[1149]: time="2024-02-09T18:38:51.498851140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864 pid=2363 runtime=io.containerd.runc.v2 Feb 9 18:38:51.509883 systemd[1]: Started cri-containerd-1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864.scope. 
Feb 9 18:38:51.546203 env[1149]: time="2024-02-09T18:38:51.546161226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6ftqk,Uid:9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\"" Feb 9 18:38:51.547358 kubelet[2008]: E0209 18:38:51.547335 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:52.247537 kubelet[2008]: E0209 18:38:52.247420 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:54.418807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178654174.mount: Deactivated successfully. Feb 9 18:38:57.689530 env[1149]: time="2024-02-09T18:38:57.689479729Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:57.690688 env[1149]: time="2024-02-09T18:38:57.690656391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:57.692728 env[1149]: time="2024-02-09T18:38:57.692682708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:57.693332 env[1149]: time="2024-02-09T18:38:57.693300360Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 18:38:57.694301 env[1149]: time="2024-02-09T18:38:57.694269498Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:38:57.695959 env[1149]: time="2024-02-09T18:38:57.695925008Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:38:57.706969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582920779.mount: Deactivated successfully. Feb 9 18:38:57.708958 env[1149]: time="2024-02-09T18:38:57.708909089Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\"" Feb 9 18:38:57.709578 env[1149]: time="2024-02-09T18:38:57.709542501Z" level=info msg="StartContainer for \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\"" Feb 9 18:38:57.727548 systemd[1]: Started cri-containerd-1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a.scope. Feb 9 18:38:57.816611 env[1149]: time="2024-02-09T18:38:57.816571322Z" level=info msg="StartContainer for \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\" returns successfully" Feb 9 18:38:57.828427 systemd[1]: cri-containerd-1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a.scope: Deactivated successfully. 
Feb 9 18:38:57.939379 env[1149]: time="2024-02-09T18:38:57.939336676Z" level=info msg="shim disconnected" id=1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a Feb 9 18:38:57.940075 env[1149]: time="2024-02-09T18:38:57.939692162Z" level=warning msg="cleaning up after shim disconnected" id=1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a namespace=k8s.io Feb 9 18:38:57.940168 env[1149]: time="2024-02-09T18:38:57.940147251Z" level=info msg="cleaning up dead shim" Feb 9 18:38:57.946759 env[1149]: time="2024-02-09T18:38:57.946732413Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n" Feb 9 18:38:58.258108 kubelet[2008]: E0209 18:38:58.258042 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:58.262973 env[1149]: time="2024-02-09T18:38:58.262246483Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:38:58.274133 kubelet[2008]: I0209 18:38:58.274109 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-77xzp" podStartSLOduration=8.274069372 pod.CreationTimestamp="2024-02-09 18:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:38:51.622431307 +0000 UTC m=+15.553457002" watchObservedRunningTime="2024-02-09 18:38:58.274069372 +0000 UTC m=+22.205095107" Feb 9 18:38:58.281819 env[1149]: time="2024-02-09T18:38:58.281776789Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} 
returns container id \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\"" Feb 9 18:38:58.282644 env[1149]: time="2024-02-09T18:38:58.282618803Z" level=info msg="StartContainer for \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\"" Feb 9 18:38:58.295338 systemd[1]: Started cri-containerd-b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7.scope. Feb 9 18:38:58.334872 env[1149]: time="2024-02-09T18:38:58.334748046Z" level=info msg="StartContainer for \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\" returns successfully" Feb 9 18:38:58.339209 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:38:58.339420 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:38:58.339552 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:38:58.341190 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:38:58.343376 systemd[1]: cri-containerd-b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7.scope: Deactivated successfully. Feb 9 18:38:58.349211 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 18:38:58.373469 env[1149]: time="2024-02-09T18:38:58.373422171Z" level=info msg="shim disconnected" id=b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7 Feb 9 18:38:58.373469 env[1149]: time="2024-02-09T18:38:58.373462252Z" level=warning msg="cleaning up after shim disconnected" id=b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7 namespace=k8s.io Feb 9 18:38:58.373469 env[1149]: time="2024-02-09T18:38:58.373471532Z" level=info msg="cleaning up dead shim" Feb 9 18:38:58.380045 env[1149]: time="2024-02-09T18:38:58.380011568Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2512 runtime=io.containerd.runc.v2\n" Feb 9 18:38:58.705611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a-rootfs.mount: Deactivated successfully. Feb 9 18:38:58.897319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528767437.mount: Deactivated successfully. 
Feb 9 18:38:59.260438 kubelet[2008]: E0209 18:38:59.260397 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:38:59.264212 env[1149]: time="2024-02-09T18:38:59.264167261Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:38:59.291243 env[1149]: time="2024-02-09T18:38:59.291194279Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\"" Feb 9 18:38:59.292001 env[1149]: time="2024-02-09T18:38:59.291969812Z" level=info msg="StartContainer for \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\"" Feb 9 18:38:59.309038 systemd[1]: Started cri-containerd-d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8.scope. 
Feb 9 18:38:59.386075 env[1149]: time="2024-02-09T18:38:59.384142174Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:59.386075 env[1149]: time="2024-02-09T18:38:59.385610159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:59.386364 env[1149]: time="2024-02-09T18:38:59.386331131Z" level=info msg="StartContainer for \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\" returns successfully" Feb 9 18:38:59.388253 systemd[1]: cri-containerd-d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8.scope: Deactivated successfully. Feb 9 18:38:59.389067 env[1149]: time="2024-02-09T18:38:59.389021857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:38:59.389840 env[1149]: time="2024-02-09T18:38:59.389791190Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 18:38:59.392785 env[1149]: time="2024-02-09T18:38:59.392718840Z" level=info msg="CreateContainer within sandbox \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:38:59.471017 env[1149]: time="2024-02-09T18:38:59.470939485Z" level=info msg="CreateContainer within sandbox \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\" 
for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\"" Feb 9 18:38:59.471493 env[1149]: time="2024-02-09T18:38:59.471466294Z" level=info msg="StartContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\"" Feb 9 18:38:59.472878 env[1149]: time="2024-02-09T18:38:59.472839037Z" level=info msg="shim disconnected" id=d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8 Feb 9 18:38:59.472878 env[1149]: time="2024-02-09T18:38:59.472875398Z" level=warning msg="cleaning up after shim disconnected" id=d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8 namespace=k8s.io Feb 9 18:38:59.473017 env[1149]: time="2024-02-09T18:38:59.472884798Z" level=info msg="cleaning up dead shim" Feb 9 18:38:59.480673 env[1149]: time="2024-02-09T18:38:59.480630689Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2570 runtime=io.containerd.runc.v2\n" Feb 9 18:38:59.486711 systemd[1]: Started cri-containerd-959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac.scope. 
Feb 9 18:38:59.532711 env[1149]: time="2024-02-09T18:38:59.532628730Z" level=info msg="StartContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" returns successfully" Feb 9 18:39:00.264527 kubelet[2008]: E0209 18:39:00.264496 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:00.265618 kubelet[2008]: E0209 18:39:00.265594 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:00.266988 env[1149]: time="2024-02-09T18:39:00.266939223Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:39:00.278568 env[1149]: time="2024-02-09T18:39:00.278523931Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\"" Feb 9 18:39:00.279195 env[1149]: time="2024-02-09T18:39:00.279153941Z" level=info msg="StartContainer for \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\"" Feb 9 18:39:00.314366 systemd[1]: Started cri-containerd-8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f.scope. Feb 9 18:39:00.366255 systemd[1]: cri-containerd-8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f.scope: Deactivated successfully. 
Feb 9 18:39:00.367972 env[1149]: time="2024-02-09T18:39:00.367920862Z" level=info msg="StartContainer for \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\" returns successfully" Feb 9 18:39:00.382790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f-rootfs.mount: Deactivated successfully. Feb 9 18:39:00.385257 env[1149]: time="2024-02-09T18:39:00.385209102Z" level=info msg="shim disconnected" id=8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f Feb 9 18:39:00.385349 env[1149]: time="2024-02-09T18:39:00.385258623Z" level=warning msg="cleaning up after shim disconnected" id=8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f namespace=k8s.io Feb 9 18:39:00.385349 env[1149]: time="2024-02-09T18:39:00.385267823Z" level=info msg="cleaning up dead shim" Feb 9 18:39:00.391948 env[1149]: time="2024-02-09T18:39:00.391901771Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2665 runtime=io.containerd.runc.v2\n" Feb 9 18:39:01.269421 kubelet[2008]: E0209 18:39:01.269385 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:01.269772 kubelet[2008]: E0209 18:39:01.269438 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:01.271662 env[1149]: time="2024-02-09T18:39:01.271612028Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:39:01.285502 kubelet[2008]: I0209 18:39:01.285450 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/cilium-operator-f59cbd8c6-6ftqk" podStartSLOduration=-9.223372025569363e+09 pod.CreationTimestamp="2024-02-09 18:38:50 +0000 UTC" firstStartedPulling="2024-02-09 18:38:51.547800227 +0000 UTC m=+15.478825962" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:00.313870784 +0000 UTC m=+24.244896519" watchObservedRunningTime="2024-02-09 18:39:01.285413683 +0000 UTC m=+25.216439378" Feb 9 18:39:01.285807 env[1149]: time="2024-02-09T18:39:01.285768129Z" level=info msg="CreateContainer within sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\"" Feb 9 18:39:01.286351 env[1149]: time="2024-02-09T18:39:01.286313537Z" level=info msg="StartContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\"" Feb 9 18:39:01.301935 systemd[1]: Started cri-containerd-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289.scope. Feb 9 18:39:01.356225 env[1149]: time="2024-02-09T18:39:01.356178944Z" level=info msg="StartContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" returns successfully" Feb 9 18:39:01.499405 kubelet[2008]: I0209 18:39:01.498782 2008 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:39:01.516260 kubelet[2008]: I0209 18:39:01.516228 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:39:01.517566 kubelet[2008]: I0209 18:39:01.517539 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:39:01.521804 systemd[1]: Created slice kubepods-burstable-pod56db49c6_09c1_4fe2_afe8_465b41f34f5b.slice. Feb 9 18:39:01.527397 systemd[1]: Created slice kubepods-burstable-pod520ed2c0_efc9_40a1_8c2f_69e0aeae0719.slice. 
Feb 9 18:39:01.533387 kubelet[2008]: I0209 18:39:01.533363 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56db49c6-09c1-4fe2-afe8-465b41f34f5b-config-volume\") pod \"coredns-787d4945fb-lwkst\" (UID: \"56db49c6-09c1-4fe2-afe8-465b41f34f5b\") " pod="kube-system/coredns-787d4945fb-lwkst" Feb 9 18:39:01.533504 kubelet[2008]: I0209 18:39:01.533402 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/520ed2c0-efc9-40a1-8c2f-69e0aeae0719-config-volume\") pod \"coredns-787d4945fb-fd99l\" (UID: \"520ed2c0-efc9-40a1-8c2f-69e0aeae0719\") " pod="kube-system/coredns-787d4945fb-fd99l" Feb 9 18:39:01.533504 kubelet[2008]: I0209 18:39:01.533427 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvfts\" (UniqueName: \"kubernetes.io/projected/520ed2c0-efc9-40a1-8c2f-69e0aeae0719-kube-api-access-xvfts\") pod \"coredns-787d4945fb-fd99l\" (UID: \"520ed2c0-efc9-40a1-8c2f-69e0aeae0719\") " pod="kube-system/coredns-787d4945fb-fd99l" Feb 9 18:39:01.533504 kubelet[2008]: I0209 18:39:01.533451 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cc5\" (UniqueName: \"kubernetes.io/projected/56db49c6-09c1-4fe2-afe8-465b41f34f5b-kube-api-access-j6cc5\") pod \"coredns-787d4945fb-lwkst\" (UID: \"56db49c6-09c1-4fe2-afe8-465b41f34f5b\") " pod="kube-system/coredns-787d4945fb-lwkst" Feb 9 18:39:01.628984 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 18:39:01.825869 kubelet[2008]: E0209 18:39:01.825775 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:01.826731 env[1149]: time="2024-02-09T18:39:01.826676546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lwkst,Uid:56db49c6-09c1-4fe2-afe8-465b41f34f5b,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:01.829663 kubelet[2008]: E0209 18:39:01.829640 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:01.830307 env[1149]: time="2024-02-09T18:39:01.830253482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fd99l,Uid:520ed2c0-efc9-40a1-8c2f-69e0aeae0719,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:01.879064 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:39:02.272848 kubelet[2008]: E0209 18:39:02.272815 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:02.287165 systemd[1]: run-containerd-runc-k8s.io-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289-runc.tWINE5.mount: Deactivated successfully. 
Feb 9 18:39:02.290904 kubelet[2008]: I0209 18:39:02.290539 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mc69t" podStartSLOduration=-9.223372024564268e+09 pod.CreationTimestamp="2024-02-09 18:38:50 +0000 UTC" firstStartedPulling="2024-02-09 18:38:50.806307976 +0000 UTC m=+14.737333711" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:02.289835933 +0000 UTC m=+26.220861748" watchObservedRunningTime="2024-02-09 18:39:02.290507783 +0000 UTC m=+26.221533518" Feb 9 18:39:03.274396 kubelet[2008]: E0209 18:39:03.274359 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:03.490190 systemd-networkd[1057]: cilium_host: Link UP Feb 9 18:39:03.490931 systemd-networkd[1057]: cilium_net: Link UP Feb 9 18:39:03.492743 systemd-networkd[1057]: cilium_net: Gained carrier Feb 9 18:39:03.492907 systemd-networkd[1057]: cilium_host: Gained carrier Feb 9 18:39:03.492977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 18:39:03.493007 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:39:03.570360 systemd-networkd[1057]: cilium_vxlan: Link UP Feb 9 18:39:03.570366 systemd-networkd[1057]: cilium_vxlan: Gained carrier Feb 9 18:39:03.738120 systemd-networkd[1057]: cilium_host: Gained IPv6LL Feb 9 18:39:03.873990 kernel: NET: Registered PF_ALG protocol family Feb 9 18:39:03.899080 systemd-networkd[1057]: cilium_net: Gained IPv6LL Feb 9 18:39:04.275429 kubelet[2008]: E0209 18:39:04.275394 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:04.462340 systemd-networkd[1057]: lxc_health: Link UP Feb 9 18:39:04.472205 systemd-networkd[1057]: lxc_health: Gained carrier Feb 9 
18:39:04.472989 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:39:04.778093 systemd-networkd[1057]: cilium_vxlan: Gained IPv6LL Feb 9 18:39:04.893603 systemd-networkd[1057]: lxcb25c329ceedb: Link UP Feb 9 18:39:04.906004 kernel: eth0: renamed from tmp77bd3 Feb 9 18:39:04.915187 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:39:04.915284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb25c329ceedb: link becomes ready Feb 9 18:39:04.916530 systemd-networkd[1057]: lxcb25c329ceedb: Gained carrier Feb 9 18:39:04.918037 systemd-networkd[1057]: lxc6164006154fb: Link UP Feb 9 18:39:04.925979 kernel: eth0: renamed from tmp1df21 Feb 9 18:39:04.938319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6164006154fb: link becomes ready Feb 9 18:39:04.937904 systemd-networkd[1057]: lxc6164006154fb: Gained carrier Feb 9 18:39:05.276635 kubelet[2008]: E0209 18:39:05.276408 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:05.994180 systemd-networkd[1057]: lxcb25c329ceedb: Gained IPv6LL Feb 9 18:39:06.186107 systemd-networkd[1057]: lxc_health: Gained IPv6LL Feb 9 18:39:06.890117 systemd-networkd[1057]: lxc6164006154fb: Gained IPv6LL Feb 9 18:39:08.451065 env[1149]: time="2024-02-09T18:39:08.449661942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:08.451065 env[1149]: time="2024-02-09T18:39:08.449710543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:08.451065 env[1149]: time="2024-02-09T18:39:08.449723583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:08.451065 env[1149]: time="2024-02-09T18:39:08.449859784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d pid=3226 runtime=io.containerd.runc.v2 Feb 9 18:39:08.468144 systemd[1]: run-containerd-runc-k8s.io-77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d-runc.acYk7r.mount: Deactivated successfully. Feb 9 18:39:08.469597 systemd[1]: Started cri-containerd-77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d.scope. Feb 9 18:39:08.473447 env[1149]: time="2024-02-09T18:39:08.473376465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:08.473447 env[1149]: time="2024-02-09T18:39:08.473423705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:08.473612 env[1149]: time="2024-02-09T18:39:08.473434385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:08.474985 env[1149]: time="2024-02-09T18:39:08.473884511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1df21c2303b2a81eade7ae4c86f88db4cd78a95495ee2695252d076564118558 pid=3248 runtime=io.containerd.runc.v2 Feb 9 18:39:08.487286 systemd[1]: Started cri-containerd-1df21c2303b2a81eade7ae4c86f88db4cd78a95495ee2695252d076564118558.scope. 
Feb 9 18:39:08.539164 systemd-resolved[1095]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:39:08.545512 systemd-resolved[1095]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:39:08.559620 env[1149]: time="2024-02-09T18:39:08.559577452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lwkst,Uid:56db49c6-09c1-4fe2-afe8-465b41f34f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df21c2303b2a81eade7ae4c86f88db4cd78a95495ee2695252d076564118558\"" Feb 9 18:39:08.560328 kubelet[2008]: E0209 18:39:08.560303 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:08.562249 env[1149]: time="2024-02-09T18:39:08.562211723Z" level=info msg="CreateContainer within sandbox \"1df21c2303b2a81eade7ae4c86f88db4cd78a95495ee2695252d076564118558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:39:08.567394 env[1149]: time="2024-02-09T18:39:08.567357145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fd99l,Uid:520ed2c0-efc9-40a1-8c2f-69e0aeae0719,Namespace:kube-system,Attempt:0,} returns sandbox id \"77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d\"" Feb 9 18:39:08.568001 kubelet[2008]: E0209 18:39:08.567958 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:08.570529 env[1149]: time="2024-02-09T18:39:08.570476462Z" level=info msg="CreateContainer within sandbox \"77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:39:08.572881 env[1149]: time="2024-02-09T18:39:08.572849690Z" level=info msg="CreateContainer within 
sandbox \"1df21c2303b2a81eade7ae4c86f88db4cd78a95495ee2695252d076564118558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93463488592dc4fb1d7e894659209ea31aecd4f99daa269cbe3304d1ec0a6d2d\"" Feb 9 18:39:08.573601 env[1149]: time="2024-02-09T18:39:08.573577739Z" level=info msg="StartContainer for \"93463488592dc4fb1d7e894659209ea31aecd4f99daa269cbe3304d1ec0a6d2d\"" Feb 9 18:39:08.581156 env[1149]: time="2024-02-09T18:39:08.581107868Z" level=info msg="CreateContainer within sandbox \"77bd3582bbe013f54d2fffc41690b5b95735b75505ff1d526f8cd530634d6c7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee40caf873dc3f151b0a2a1b150db14ef0290825c0c647d8fc1677698e264aac\"" Feb 9 18:39:08.581534 env[1149]: time="2024-02-09T18:39:08.581507393Z" level=info msg="StartContainer for \"ee40caf873dc3f151b0a2a1b150db14ef0290825c0c647d8fc1677698e264aac\"" Feb 9 18:39:08.587577 systemd[1]: Started cri-containerd-93463488592dc4fb1d7e894659209ea31aecd4f99daa269cbe3304d1ec0a6d2d.scope. Feb 9 18:39:08.605106 systemd[1]: Started cri-containerd-ee40caf873dc3f151b0a2a1b150db14ef0290825c0c647d8fc1677698e264aac.scope. 
Feb 9 18:39:08.632735 env[1149]: time="2024-02-09T18:39:08.632685803Z" level=info msg="StartContainer for \"93463488592dc4fb1d7e894659209ea31aecd4f99daa269cbe3304d1ec0a6d2d\" returns successfully" Feb 9 18:39:08.636482 env[1149]: time="2024-02-09T18:39:08.636428408Z" level=info msg="StartContainer for \"ee40caf873dc3f151b0a2a1b150db14ef0290825c0c647d8fc1677698e264aac\" returns successfully" Feb 9 18:39:09.286504 kubelet[2008]: E0209 18:39:09.286453 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:09.288664 kubelet[2008]: E0209 18:39:09.288602 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:09.297465 kubelet[2008]: I0209 18:39:09.297434 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-lwkst" podStartSLOduration=19.297403526 pod.CreationTimestamp="2024-02-09 18:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:09.295347702 +0000 UTC m=+33.226373437" watchObservedRunningTime="2024-02-09 18:39:09.297403526 +0000 UTC m=+33.228429261" Feb 9 18:39:09.303800 kubelet[2008]: I0209 18:39:09.303765 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fd99l" podStartSLOduration=19.303735759 pod.CreationTimestamp="2024-02-09 18:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:09.303500076 +0000 UTC m=+33.234525771" watchObservedRunningTime="2024-02-09 18:39:09.303735759 +0000 UTC m=+33.234761494" Feb 9 18:39:10.289808 kubelet[2008]: E0209 18:39:10.289766 2008 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:10.290306 kubelet[2008]: E0209 18:39:10.290280 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:11.291135 kubelet[2008]: E0209 18:39:11.291108 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:11.291451 kubelet[2008]: E0209 18:39:11.291153 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:12.362204 kubelet[2008]: I0209 18:39:12.362163 2008 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:39:12.362904 kubelet[2008]: E0209 18:39:12.362884 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:13.294912 kubelet[2008]: E0209 18:39:13.294866 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:17.337936 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:44242.service. Feb 9 18:39:17.387254 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:17.388804 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:17.393598 systemd[1]: Started session-6.scope. Feb 9 18:39:17.393742 systemd-logind[1138]: New session 6 of user core. 
Feb 9 18:39:17.564751 sshd[3435]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:17.567219 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:44242.service: Deactivated successfully. Feb 9 18:39:17.567944 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:39:17.570240 systemd-logind[1138]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:39:17.571264 systemd-logind[1138]: Removed session 6. Feb 9 18:39:22.569347 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:44252.service. Feb 9 18:39:22.615974 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 44252 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:22.617106 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:22.620457 systemd-logind[1138]: New session 7 of user core. Feb 9 18:39:22.621652 systemd[1]: Started session-7.scope. Feb 9 18:39:22.737460 sshd[3452]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:22.740213 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:44252.service: Deactivated successfully. Feb 9 18:39:22.740924 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:39:22.741798 systemd-logind[1138]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:39:22.742571 systemd-logind[1138]: Removed session 7. Feb 9 18:39:27.742121 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:45096.service. Feb 9 18:39:27.782543 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 45096 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:27.783828 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:27.787715 systemd-logind[1138]: New session 8 of user core. Feb 9 18:39:27.789022 systemd[1]: Started session-8.scope. Feb 9 18:39:27.896842 sshd[3466]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:27.899233 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:45096.service: Deactivated successfully. 
Feb 9 18:39:27.900012 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:39:27.900537 systemd-logind[1138]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:39:27.901214 systemd-logind[1138]: Removed session 8. Feb 9 18:39:32.901177 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:47238.service. Feb 9 18:39:32.945192 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 47238 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:32.946733 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:32.950338 systemd-logind[1138]: New session 9 of user core. Feb 9 18:39:32.951340 systemd[1]: Started session-9.scope. Feb 9 18:39:33.056384 sshd[3481]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:33.059259 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:47238.service: Deactivated successfully. Feb 9 18:39:33.059914 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:39:33.060502 systemd-logind[1138]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:39:33.061672 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:47242.service. Feb 9 18:39:33.062533 systemd-logind[1138]: Removed session 9. Feb 9 18:39:33.102634 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:33.104020 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:33.107400 systemd-logind[1138]: New session 10 of user core. Feb 9 18:39:33.108377 systemd[1]: Started session-10.scope. Feb 9 18:39:33.958286 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:47246.service. Feb 9 18:39:33.961230 sshd[3495]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:33.968358 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:39:33.968973 systemd-logind[1138]: Session 10 logged out. Waiting for processes to exit. 
Feb 9 18:39:33.969105 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:47242.service: Deactivated successfully. Feb 9 18:39:33.970205 systemd-logind[1138]: Removed session 10. Feb 9 18:39:34.005131 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 47246 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:34.006683 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:34.010226 systemd-logind[1138]: New session 11 of user core. Feb 9 18:39:34.011141 systemd[1]: Started session-11.scope. Feb 9 18:39:34.119289 sshd[3505]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:34.121604 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:39:34.122172 systemd-logind[1138]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:39:34.122300 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:47246.service: Deactivated successfully. Feb 9 18:39:34.123271 systemd-logind[1138]: Removed session 11. Feb 9 18:39:39.124538 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:47254.service. Feb 9 18:39:39.164943 sshd[3526]: Accepted publickey for core from 10.0.0.1 port 47254 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:39.166169 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:39.169768 systemd-logind[1138]: New session 12 of user core. Feb 9 18:39:39.170265 systemd[1]: Started session-12.scope. Feb 9 18:39:39.282338 sshd[3526]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:39.284704 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:47254.service: Deactivated successfully. Feb 9 18:39:39.285526 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:39:39.286073 systemd-logind[1138]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:39:39.286756 systemd-logind[1138]: Removed session 12. Feb 9 18:39:44.286916 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:37922.service. 
Feb 9 18:39:44.326922 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 37922 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:44.328330 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:44.331747 systemd-logind[1138]: New session 13 of user core. Feb 9 18:39:44.332246 systemd[1]: Started session-13.scope. Feb 9 18:39:44.435760 sshd[3539]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:44.439702 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:37936.service. Feb 9 18:39:44.440699 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:37922.service: Deactivated successfully. Feb 9 18:39:44.441500 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:39:44.443059 systemd-logind[1138]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:39:44.445532 systemd-logind[1138]: Removed session 13. Feb 9 18:39:44.480825 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 37936 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:44.482215 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:44.486120 systemd-logind[1138]: New session 14 of user core. Feb 9 18:39:44.486752 systemd[1]: Started session-14.scope. Feb 9 18:39:44.676554 sshd[3551]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:44.679851 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:37936.service: Deactivated successfully. Feb 9 18:39:44.680546 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:39:44.681168 systemd-logind[1138]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:39:44.682287 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:37946.service. Feb 9 18:39:44.683070 systemd-logind[1138]: Removed session 14. 
Feb 9 18:39:44.724382 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 37946 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:44.725704 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:44.729011 systemd-logind[1138]: New session 15 of user core. Feb 9 18:39:44.729511 systemd[1]: Started session-15.scope. Feb 9 18:39:45.531109 sshd[3563]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:45.532722 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:37960.service. Feb 9 18:39:45.538508 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:37946.service: Deactivated successfully. Feb 9 18:39:45.538862 systemd-logind[1138]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:39:45.539367 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:39:45.541380 systemd-logind[1138]: Removed session 15. Feb 9 18:39:45.577444 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 37960 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:45.579093 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:45.584023 systemd[1]: Started session-16.scope. Feb 9 18:39:45.584315 systemd-logind[1138]: New session 16 of user core. Feb 9 18:39:45.770228 sshd[3591]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:45.772837 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:37964.service. Feb 9 18:39:45.776388 systemd-logind[1138]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:39:45.776630 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:37960.service: Deactivated successfully. Feb 9 18:39:45.777487 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:39:45.778103 systemd-logind[1138]: Removed session 16. 
Feb 9 18:39:45.815641 sshd[3640]: Accepted publickey for core from 10.0.0.1 port 37964 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:45.817310 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:45.820241 systemd-logind[1138]: New session 17 of user core. Feb 9 18:39:45.821133 systemd[1]: Started session-17.scope. Feb 9 18:39:45.932421 sshd[3640]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:45.934770 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:37964.service: Deactivated successfully. Feb 9 18:39:45.935607 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:39:45.936114 systemd-logind[1138]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:39:45.936723 systemd-logind[1138]: Removed session 17. Feb 9 18:39:50.937223 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:37972.service. Feb 9 18:39:50.977498 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 37972 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:50.978945 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:50.982039 systemd-logind[1138]: New session 18 of user core. Feb 9 18:39:50.982974 systemd[1]: Started session-18.scope. Feb 9 18:39:51.089402 sshd[3681]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:51.091636 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:37972.service: Deactivated successfully. Feb 9 18:39:51.092480 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:39:51.092981 systemd-logind[1138]: Session 18 logged out. Waiting for processes to exit. Feb 9 18:39:51.093592 systemd-logind[1138]: Removed session 18. 
Feb 9 18:39:55.213920 kubelet[2008]: E0209 18:39:55.213880 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:56.094138 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:40004.service.
Feb 9 18:39:56.134925 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:39:56.137130 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:56.140341 systemd-logind[1138]: New session 19 of user core.
Feb 9 18:39:56.141235 systemd[1]: Started session-19.scope.
Feb 9 18:39:56.245271 sshd[3696]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:56.247410 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:39:56.247990 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:40004.service: Deactivated successfully.
Feb 9 18:39:56.248896 systemd-logind[1138]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:39:56.249510 systemd-logind[1138]: Removed session 19.
Feb 9 18:39:57.214098 kubelet[2008]: E0209 18:39:57.214069 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:01.250393 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:40006.service.
Feb 9 18:40:01.290297 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 40006 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:40:01.291382 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:40:01.295317 systemd[1]: Started session-20.scope.
Feb 9 18:40:01.295629 systemd-logind[1138]: New session 20 of user core.
Feb 9 18:40:01.400016 sshd[3709]: pam_unix(sshd:session): session closed for user core
Feb 9 18:40:01.402295 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:40006.service: Deactivated successfully.
Feb 9 18:40:01.403054 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 18:40:01.403748 systemd-logind[1138]: Session 20 logged out. Waiting for processes to exit.
Feb 9 18:40:01.404497 systemd-logind[1138]: Removed session 20.
Feb 9 18:40:05.213989 kubelet[2008]: E0209 18:40:05.213588 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:06.404545 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:35652.service.
Feb 9 18:40:06.446249 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 35652 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:40:06.447071 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:40:06.452721 systemd-logind[1138]: New session 21 of user core.
Feb 9 18:40:06.455417 systemd[1]: Started session-21.scope.
Feb 9 18:40:06.580381 sshd[3722]: pam_unix(sshd:session): session closed for user core
Feb 9 18:40:06.584641 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:35654.service.
Feb 9 18:40:06.589005 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:35652.service: Deactivated successfully.
Feb 9 18:40:06.589693 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 18:40:06.591826 systemd-logind[1138]: Session 21 logged out. Waiting for processes to exit.
Feb 9 18:40:06.593191 systemd-logind[1138]: Removed session 21.
Feb 9 18:40:06.627005 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:40:06.628295 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:40:06.634239 systemd-logind[1138]: New session 22 of user core.
Feb 9 18:40:06.637400 systemd[1]: Started session-22.scope.
Feb 9 18:40:07.213476 kubelet[2008]: E0209 18:40:07.213440 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:08.383557 env[1149]: time="2024-02-09T18:40:08.383512063Z" level=info msg="StopContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" with timeout 30 (s)"
Feb 9 18:40:08.384136 env[1149]: time="2024-02-09T18:40:08.384051585Z" level=info msg="Stop container \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" with signal terminated"
Feb 9 18:40:08.394442 systemd[1]: run-containerd-runc-k8s.io-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289-runc.vSPs6T.mount: Deactivated successfully.
Feb 9 18:40:08.403387 systemd[1]: cri-containerd-959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac.scope: Deactivated successfully.
Feb 9 18:40:08.416228 env[1149]: time="2024-02-09T18:40:08.416168701Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:40:08.420929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac-rootfs.mount: Deactivated successfully.
Feb 9 18:40:08.424209 env[1149]: time="2024-02-09T18:40:08.424151020Z" level=info msg="StopContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" with timeout 1 (s)"
Feb 9 18:40:08.424792 env[1149]: time="2024-02-09T18:40:08.424766143Z" level=info msg="Stop container \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" with signal terminated"
Feb 9 18:40:08.431263 systemd-networkd[1057]: lxc_health: Link DOWN
Feb 9 18:40:08.431269 systemd-networkd[1057]: lxc_health: Lost carrier
Feb 9 18:40:08.433402 env[1149]: time="2024-02-09T18:40:08.433364825Z" level=info msg="shim disconnected" id=959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac
Feb 9 18:40:08.433553 env[1149]: time="2024-02-09T18:40:08.433533026Z" level=warning msg="cleaning up after shim disconnected" id=959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac namespace=k8s.io
Feb 9 18:40:08.433634 env[1149]: time="2024-02-09T18:40:08.433620266Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.439938 env[1149]: time="2024-02-09T18:40:08.439901937Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3790 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.442517 env[1149]: time="2024-02-09T18:40:08.442481549Z" level=info msg="StopContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" returns successfully"
Feb 9 18:40:08.443453 env[1149]: time="2024-02-09T18:40:08.443423674Z" level=info msg="StopPodSandbox for \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\""
Feb 9 18:40:08.443525 env[1149]: time="2024-02-09T18:40:08.443485674Z" level=info msg="Container to stop \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.444915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864-shm.mount: Deactivated successfully.
Feb 9 18:40:08.455208 systemd[1]: cri-containerd-1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864.scope: Deactivated successfully.
Feb 9 18:40:08.459346 systemd[1]: cri-containerd-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289.scope: Deactivated successfully.
Feb 9 18:40:08.459641 systemd[1]: cri-containerd-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289.scope: Consumed 6.510s CPU time.
Feb 9 18:40:08.487439 env[1149]: time="2024-02-09T18:40:08.487392327Z" level=info msg="shim disconnected" id=1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864
Feb 9 18:40:08.487805 env[1149]: time="2024-02-09T18:40:08.487783089Z" level=warning msg="cleaning up after shim disconnected" id=1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864 namespace=k8s.io
Feb 9 18:40:08.488005 env[1149]: time="2024-02-09T18:40:08.487390607Z" level=info msg="shim disconnected" id=b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289
Feb 9 18:40:08.488071 env[1149]: time="2024-02-09T18:40:08.488012850Z" level=warning msg="cleaning up after shim disconnected" id=b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289 namespace=k8s.io
Feb 9 18:40:08.488071 env[1149]: time="2024-02-09T18:40:08.488028170Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.488252 env[1149]: time="2024-02-09T18:40:08.487994250Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.495186 env[1149]: time="2024-02-09T18:40:08.495145805Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.496229 env[1149]: time="2024-02-09T18:40:08.496200530Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3837 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.496691 env[1149]: time="2024-02-09T18:40:08.496661292Z" level=info msg="TearDown network for sandbox \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\" successfully"
Feb 9 18:40:08.496805 env[1149]: time="2024-02-09T18:40:08.496787093Z" level=info msg="StopPodSandbox for \"1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864\" returns successfully"
Feb 9 18:40:08.498196 env[1149]: time="2024-02-09T18:40:08.498167819Z" level=info msg="StopContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" returns successfully"
Feb 9 18:40:08.498573 env[1149]: time="2024-02-09T18:40:08.498543781Z" level=info msg="StopPodSandbox for \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\""
Feb 9 18:40:08.498631 env[1149]: time="2024-02-09T18:40:08.498600541Z" level=info msg="Container to stop \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.498631 env[1149]: time="2024-02-09T18:40:08.498614461Z" level=info msg="Container to stop \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.498680 env[1149]: time="2024-02-09T18:40:08.498631101Z" level=info msg="Container to stop \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.498680 env[1149]: time="2024-02-09T18:40:08.498642422Z" level=info msg="Container to stop \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.498680 env[1149]: time="2024-02-09T18:40:08.498653222Z" level=info msg="Container to stop \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.504912 systemd[1]: cri-containerd-5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7.scope: Deactivated successfully.
Feb 9 18:40:08.527862 env[1149]: time="2024-02-09T18:40:08.527818443Z" level=info msg="shim disconnected" id=5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7
Feb 9 18:40:08.527862 env[1149]: time="2024-02-09T18:40:08.527865483Z" level=warning msg="cleaning up after shim disconnected" id=5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7 namespace=k8s.io
Feb 9 18:40:08.527862 env[1149]: time="2024-02-09T18:40:08.527874803Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.528466 kubelet[2008]: I0209 18:40:08.528440 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8s5\" (UniqueName: \"kubernetes.io/projected/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-kube-api-access-qf8s5\") pod \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\" (UID: \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\") "
Feb 9 18:40:08.528789 kubelet[2008]: I0209 18:40:08.528492 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-cilium-config-path\") pod \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\" (UID: \"9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957\") "
Feb 9 18:40:08.529128 kubelet[2008]: W0209 18:40:08.529083 2008 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:40:08.531071 kubelet[2008]: I0209 18:40:08.531033 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957" (UID: "9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:40:08.534627 kubelet[2008]: I0209 18:40:08.534141 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-kube-api-access-qf8s5" (OuterVolumeSpecName: "kube-api-access-qf8s5") pod "9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957" (UID: "9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957"). InnerVolumeSpecName "kube-api-access-qf8s5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:08.538054 env[1149]: time="2024-02-09T18:40:08.538014733Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3880 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.538435 env[1149]: time="2024-02-09T18:40:08.538408535Z" level=info msg="TearDown network for sandbox \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" successfully"
Feb 9 18:40:08.538520 env[1149]: time="2024-02-09T18:40:08.538436295Z" level=info msg="StopPodSandbox for \"5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7\" returns successfully"
Feb 9 18:40:08.628891 kubelet[2008]: I0209 18:40:08.628823 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20dcf362-0d80-4715-8780-8efbab2e5ccf-clustermesh-secrets\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.628891 kubelet[2008]: I0209 18:40:08.628872 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-lib-modules\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.628891 kubelet[2008]: I0209 18:40:08.628891 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-hostproc\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.628910 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-xtables-lock\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.628938 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxn6\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-kube-api-access-4nxn6\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.628972 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-etc-cni-netd\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.628992 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-hubble-tls\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.629008 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-cgroup\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629111 kubelet[2008]: I0209 18:40:08.629027 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-kernel\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629045 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-bpf-maps\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629068 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-config-path\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629086 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-net\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629105 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cni-path\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629128 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-run\") pod \"20dcf362-0d80-4715-8780-8efbab2e5ccf\" (UID: \"20dcf362-0d80-4715-8780-8efbab2e5ccf\") "
Feb 9 18:40:08.629287 kubelet[2008]: I0209 18:40:08.629159 2008 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-qf8s5\" (UniqueName: \"kubernetes.io/projected/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-kube-api-access-qf8s5\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.629423 kubelet[2008]: I0209 18:40:08.629170 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.629423 kubelet[2008]: I0209 18:40:08.629199 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.629423 kubelet[2008]: I0209 18:40:08.629229 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-hostproc" (OuterVolumeSpecName: "hostproc") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.629423 kubelet[2008]: I0209 18:40:08.629254 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631464 kubelet[2008]: I0209 18:40:08.629531 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631464 kubelet[2008]: I0209 18:40:08.629562 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631464 kubelet[2008]: I0209 18:40:08.629562 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631464 kubelet[2008]: W0209 18:40:08.629711 2008 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/20dcf362-0d80-4715-8780-8efbab2e5ccf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:40:08.631464 kubelet[2008]: I0209 18:40:08.629732 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631654 kubelet[2008]: I0209 18:40:08.629766 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631654 kubelet[2008]: I0209 18:40:08.629786 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cni-path" (OuterVolumeSpecName: "cni-path") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631654 kubelet[2008]: I0209 18:40:08.629801 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:08.631654 kubelet[2008]: I0209 18:40:08.631420 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:40:08.631654 kubelet[2008]: I0209 18:40:08.631602 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20dcf362-0d80-4715-8780-8efbab2e5ccf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:40:08.632019 kubelet[2008]: I0209 18:40:08.631986 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:08.632235 kubelet[2008]: I0209 18:40:08.632201 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-kube-api-access-4nxn6" (OuterVolumeSpecName: "kube-api-access-4nxn6") pod "20dcf362-0d80-4715-8780-8efbab2e5ccf" (UID: "20dcf362-0d80-4715-8780-8efbab2e5ccf"). InnerVolumeSpecName "kube-api-access-4nxn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:08.729620 kubelet[2008]: I0209 18:40:08.729587 2008 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4nxn6\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-kube-api-access-4nxn6\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729620 kubelet[2008]: I0209 18:40:08.729620 2008 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729630 2008 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20dcf362-0d80-4715-8780-8efbab2e5ccf-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729660 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729670 2008 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729679 2008 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729688 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729697 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729705 2008 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729753 kubelet[2008]: I0209 18:40:08.729715 2008 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729937 kubelet[2008]: I0209 18:40:08.729726 2008 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729937 kubelet[2008]: I0209 18:40:08.729735 2008 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729937 kubelet[2008]: I0209 18:40:08.729745 2008 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20dcf362-0d80-4715-8780-8efbab2e5ccf-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:08.729937 kubelet[2008]: I0209 18:40:08.729754 2008 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20dcf362-0d80-4715-8780-8efbab2e5ccf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 18:40:09.390253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289-rootfs.mount: Deactivated successfully.
Feb 9 18:40:09.390361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9b01f06cf31dfcae17e9d59aff30de2c55a57b4f4244a1966b6111f8834864-rootfs.mount: Deactivated successfully.
Feb 9 18:40:09.390420 systemd[1]: var-lib-kubelet-pods-9cd40c7d\x2de78a\x2d42f1\x2d8ec6\x2d4bc2f52e6957-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqf8s5.mount: Deactivated successfully.
Feb 9 18:40:09.390496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7-rootfs.mount: Deactivated successfully.
Feb 9 18:40:09.390548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5785fc924e7d60d964941e909374e4ee6a1ead532a2aaa36ece3535a8bc861c7-shm.mount: Deactivated successfully.
Feb 9 18:40:09.390596 systemd[1]: var-lib-kubelet-pods-20dcf362\x2d0d80\x2d4715\x2d8780\x2d8efbab2e5ccf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nxn6.mount: Deactivated successfully.
Feb 9 18:40:09.390652 systemd[1]: var-lib-kubelet-pods-20dcf362\x2d0d80\x2d4715\x2d8780\x2d8efbab2e5ccf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 18:40:09.390702 systemd[1]: var-lib-kubelet-pods-20dcf362\x2d0d80\x2d4715\x2d8780\x2d8efbab2e5ccf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:40:09.401996 kubelet[2008]: I0209 18:40:09.401034 2008 scope.go:115] "RemoveContainer" containerID="b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289"
Feb 9 18:40:09.404437 systemd[1]: Removed slice kubepods-burstable-pod20dcf362_0d80_4715_8780_8efbab2e5ccf.slice.
Feb 9 18:40:09.404522 systemd[1]: kubepods-burstable-pod20dcf362_0d80_4715_8780_8efbab2e5ccf.slice: Consumed 6.755s CPU time.
Feb 9 18:40:09.405355 env[1149]: time="2024-02-09T18:40:09.405287668Z" level=info msg="RemoveContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\"" Feb 9 18:40:09.409181 env[1149]: time="2024-02-09T18:40:09.409142726Z" level=info msg="RemoveContainer for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" returns successfully" Feb 9 18:40:09.410185 kubelet[2008]: I0209 18:40:09.410117 2008 scope.go:115] "RemoveContainer" containerID="8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f" Feb 9 18:40:09.411269 env[1149]: time="2024-02-09T18:40:09.411230057Z" level=info msg="RemoveContainer for \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\"" Feb 9 18:40:09.413665 systemd[1]: Removed slice kubepods-besteffort-pod9cd40c7d_e78a_42f1_8ec6_4bc2f52e6957.slice. Feb 9 18:40:09.416472 env[1149]: time="2024-02-09T18:40:09.416422602Z" level=info msg="RemoveContainer for \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\" returns successfully" Feb 9 18:40:09.416659 kubelet[2008]: I0209 18:40:09.416600 2008 scope.go:115] "RemoveContainer" containerID="d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8" Feb 9 18:40:09.417525 env[1149]: time="2024-02-09T18:40:09.417500087Z" level=info msg="RemoveContainer for \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\"" Feb 9 18:40:09.421870 env[1149]: time="2024-02-09T18:40:09.421837108Z" level=info msg="RemoveContainer for \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\" returns successfully" Feb 9 18:40:09.422309 kubelet[2008]: I0209 18:40:09.422288 2008 scope.go:115] "RemoveContainer" containerID="b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7" Feb 9 18:40:09.424537 env[1149]: time="2024-02-09T18:40:09.424511241Z" level=info msg="RemoveContainer for \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\"" Feb 9 18:40:09.430451 env[1149]: 
time="2024-02-09T18:40:09.430294669Z" level=info msg="RemoveContainer for \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\" returns successfully" Feb 9 18:40:09.430535 kubelet[2008]: I0209 18:40:09.430455 2008 scope.go:115] "RemoveContainer" containerID="1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a" Feb 9 18:40:09.431780 env[1149]: time="2024-02-09T18:40:09.431751317Z" level=info msg="RemoveContainer for \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\"" Feb 9 18:40:09.434389 env[1149]: time="2024-02-09T18:40:09.434352769Z" level=info msg="RemoveContainer for \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\" returns successfully" Feb 9 18:40:09.434630 kubelet[2008]: I0209 18:40:09.434614 2008 scope.go:115] "RemoveContainer" containerID="b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289" Feb 9 18:40:09.435014 env[1149]: time="2024-02-09T18:40:09.434933172Z" level=error msg="ContainerStatus for \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\": not found" Feb 9 18:40:09.435977 kubelet[2008]: E0209 18:40:09.435930 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\": not found" containerID="b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289" Feb 9 18:40:09.436180 kubelet[2008]: I0209 18:40:09.436151 2008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289} err="failed to get container status \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"b43caed33010930b459ecf1b0bd2c19db727d3109bb6492b0bf55f5112332289\": not found" Feb 9 18:40:09.436314 kubelet[2008]: I0209 18:40:09.436300 2008 scope.go:115] "RemoveContainer" containerID="8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f" Feb 9 18:40:09.436586 env[1149]: time="2024-02-09T18:40:09.436529580Z" level=error msg="ContainerStatus for \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\": not found" Feb 9 18:40:09.436839 kubelet[2008]: E0209 18:40:09.436819 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\": not found" containerID="8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f" Feb 9 18:40:09.436879 kubelet[2008]: I0209 18:40:09.436853 2008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f} err="failed to get container status \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cd03757721a26ec8c06b9e25909bdc6f538502b677ee1ca5909234f9af7038f\": not found" Feb 9 18:40:09.436879 kubelet[2008]: I0209 18:40:09.436864 2008 scope.go:115] "RemoveContainer" containerID="d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8" Feb 9 18:40:09.437229 env[1149]: time="2024-02-09T18:40:09.437184823Z" level=error msg="ContainerStatus for \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\": not found" Feb 9 18:40:09.437402 kubelet[2008]: E0209 18:40:09.437377 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\": not found" containerID="d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8" Feb 9 18:40:09.437487 kubelet[2008]: I0209 18:40:09.437404 2008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8} err="failed to get container status \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d000f19109a76b1acf1f11a41b026bf447ee2ab41b3841fc1c58720e74555ea8\": not found" Feb 9 18:40:09.437487 kubelet[2008]: I0209 18:40:09.437414 2008 scope.go:115] "RemoveContainer" containerID="b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7" Feb 9 18:40:09.438421 env[1149]: time="2024-02-09T18:40:09.438350189Z" level=error msg="ContainerStatus for \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\": not found" Feb 9 18:40:09.438538 kubelet[2008]: E0209 18:40:09.438518 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\": not found" containerID="b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7" Feb 9 18:40:09.438578 kubelet[2008]: I0209 18:40:09.438550 2008 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7} err="failed to get container status \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3006b1440c5322b4ea0a66d147db9c936f4c4c511bd5473613743034e2e6fd7\": not found" Feb 9 18:40:09.438578 kubelet[2008]: I0209 18:40:09.438560 2008 scope.go:115] "RemoveContainer" containerID="1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a" Feb 9 18:40:09.438914 env[1149]: time="2024-02-09T18:40:09.438697550Z" level=error msg="ContainerStatus for \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\": not found" Feb 9 18:40:09.439022 kubelet[2008]: E0209 18:40:09.438835 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\": not found" containerID="1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a" Feb 9 18:40:09.439022 kubelet[2008]: I0209 18:40:09.438856 2008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a} err="failed to get container status \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1270a0ba1f415453d18b5d3f75e0c1fd31007c2d0a230baf53779f69f63b138a\": not found" Feb 9 18:40:09.439022 kubelet[2008]: I0209 18:40:09.438865 2008 scope.go:115] "RemoveContainer" containerID="959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac" Feb 9 18:40:09.441293 env[1149]: 
time="2024-02-09T18:40:09.440044317Z" level=info msg="RemoveContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\"" Feb 9 18:40:09.442503 env[1149]: time="2024-02-09T18:40:09.442463249Z" level=info msg="RemoveContainer for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" returns successfully" Feb 9 18:40:09.442639 kubelet[2008]: I0209 18:40:09.442610 2008 scope.go:115] "RemoveContainer" containerID="959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac" Feb 9 18:40:09.442826 env[1149]: time="2024-02-09T18:40:09.442769570Z" level=error msg="ContainerStatus for \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\": not found" Feb 9 18:40:09.442934 kubelet[2008]: E0209 18:40:09.442912 2008 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\": not found" containerID="959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac" Feb 9 18:40:09.442972 kubelet[2008]: I0209 18:40:09.442945 2008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac} err="failed to get container status \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\": rpc error: code = NotFound desc = an error occurred when try to find container \"959fb31a226531567e91df6dcea4b4b97bfed51b308549dacd1a99fef692adac\": not found" Feb 9 18:40:10.216806 kubelet[2008]: I0209 18:40:10.216149 2008 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=20dcf362-0d80-4715-8780-8efbab2e5ccf path="/var/lib/kubelet/pods/20dcf362-0d80-4715-8780-8efbab2e5ccf/volumes" Feb 9 
18:40:10.216806 kubelet[2008]: I0209 18:40:10.216726 2008 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957 path="/var/lib/kubelet/pods/9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957/volumes" Feb 9 18:40:10.339555 sshd[3734]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:10.342236 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:35656.service. Feb 9 18:40:10.342741 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:35654.service: Deactivated successfully. Feb 9 18:40:10.343527 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:40:10.343705 systemd[1]: session-22.scope: Consumed 1.067s CPU time. Feb 9 18:40:10.347268 systemd-logind[1138]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:40:10.348284 systemd-logind[1138]: Removed session 22. Feb 9 18:40:10.384566 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 35656 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:10.385733 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:10.390035 systemd[1]: Started session-23.scope. Feb 9 18:40:10.390512 systemd-logind[1138]: New session 23 of user core. Feb 9 18:40:11.242877 kubelet[2008]: E0209 18:40:11.242762 2008 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:11.774135 sshd[3899]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:11.778750 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:35666.service. Feb 9 18:40:11.780887 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:35656.service: Deactivated successfully. Feb 9 18:40:11.784734 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:40:11.785541 systemd[1]: session-23.scope: Consumed 1.271s CPU time. Feb 9 18:40:11.786326 systemd-logind[1138]: Session 23 logged out. 
Waiting for processes to exit. Feb 9 18:40:11.787197 systemd-logind[1138]: Removed session 23. Feb 9 18:40:11.798172 kubelet[2008]: I0209 18:40:11.798131 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798194 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="mount-cgroup" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798204 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="mount-bpf-fs" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798211 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957" containerName="cilium-operator" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798218 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="clean-cilium-state" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798224 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="cilium-agent" Feb 9 18:40:11.798296 kubelet[2008]: E0209 18:40:11.798231 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="apply-sysctl-overwrites" Feb 9 18:40:11.798296 kubelet[2008]: I0209 18:40:11.798261 2008 memory_manager.go:346] "RemoveStaleState removing state" podUID="20dcf362-0d80-4715-8780-8efbab2e5ccf" containerName="cilium-agent" Feb 9 18:40:11.798296 kubelet[2008]: I0209 18:40:11.798268 2008 memory_manager.go:346] "RemoveStaleState removing state" podUID="9cd40c7d-e78a-42f1-8ec6-4bc2f52e6957" containerName="cilium-operator" Feb 9 18:40:11.803221 systemd[1]: Created slice kubepods-burstable-podd4c63dd6_b2e2_4059_8cc5_e7e61efc82c6.slice. 
Feb 9 18:40:11.828788 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 35666 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:11.830479 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:11.835158 systemd[1]: Started session-24.scope. Feb 9 18:40:11.835468 systemd-logind[1138]: New session 24 of user core. Feb 9 18:40:11.846505 kubelet[2008]: I0209 18:40:11.846415 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-lib-modules\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846505 kubelet[2008]: I0209 18:40:11.846461 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-net\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846505 kubelet[2008]: I0209 18:40:11.846486 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-config-path\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846505 kubelet[2008]: I0209 18:40:11.846504 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-ipsec-secrets\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846524 2008 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84gz8\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-kube-api-access-84gz8\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846543 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-run\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846563 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-etc-cni-netd\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846583 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-cgroup\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846602 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-xtables-lock\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846789 kubelet[2008]: I0209 18:40:11.846622 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-kernel\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846925 kubelet[2008]: I0209 18:40:11.846642 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-bpf-maps\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846925 kubelet[2008]: I0209 18:40:11.846660 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hostproc\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846925 kubelet[2008]: I0209 18:40:11.846678 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hubble-tls\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846925 kubelet[2008]: I0209 18:40:11.846706 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cni-path\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.846925 kubelet[2008]: I0209 18:40:11.846725 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-clustermesh-secrets\") pod \"cilium-bzvkd\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " 
pod="kube-system/cilium-bzvkd" Feb 9 18:40:11.969168 sshd[3912]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:11.973940 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:35666.service: Deactivated successfully. Feb 9 18:40:11.974582 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:40:11.979373 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:35676.service. Feb 9 18:40:11.979790 systemd-logind[1138]: Session 24 logged out. Waiting for processes to exit. Feb 9 18:40:11.980733 systemd-logind[1138]: Removed session 24. Feb 9 18:40:12.020292 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 35676 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:12.021493 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:12.025015 systemd-logind[1138]: New session 25 of user core. Feb 9 18:40:12.025866 systemd[1]: Started session-25.scope. Feb 9 18:40:12.108840 kubelet[2008]: E0209 18:40:12.108811 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.109524 env[1149]: time="2024-02-09T18:40:12.109487167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bzvkd,Uid:d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:12.126335 env[1149]: time="2024-02-09T18:40:12.126269329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:12.126753 env[1149]: time="2024-02-09T18:40:12.126309850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:12.126753 env[1149]: time="2024-02-09T18:40:12.126321010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:12.126753 env[1149]: time="2024-02-09T18:40:12.126535251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b pid=3949 runtime=io.containerd.runc.v2 Feb 9 18:40:12.139529 systemd[1]: Started cri-containerd-b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b.scope. Feb 9 18:40:12.180424 env[1149]: time="2024-02-09T18:40:12.180381075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bzvkd,Uid:d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\"" Feb 9 18:40:12.181323 kubelet[2008]: E0209 18:40:12.181302 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.183371 env[1149]: time="2024-02-09T18:40:12.183329490Z" level=info msg="CreateContainer within sandbox \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:12.193156 env[1149]: time="2024-02-09T18:40:12.193111738Z" level=info msg="CreateContainer within sandbox \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\"" Feb 9 18:40:12.193999 env[1149]: time="2024-02-09T18:40:12.193970582Z" level=info msg="StartContainer for \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\"" Feb 9 18:40:12.210198 systemd[1]: Started cri-containerd-84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0.scope. 
Feb 9 18:40:12.228442 systemd[1]: cri-containerd-84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0.scope: Deactivated successfully. Feb 9 18:40:12.251074 env[1149]: time="2024-02-09T18:40:12.251024743Z" level=info msg="shim disconnected" id=84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0 Feb 9 18:40:12.251074 env[1149]: time="2024-02-09T18:40:12.251074903Z" level=warning msg="cleaning up after shim disconnected" id=84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0 namespace=k8s.io Feb 9 18:40:12.251335 env[1149]: time="2024-02-09T18:40:12.251084943Z" level=info msg="cleaning up dead shim" Feb 9 18:40:12.258547 env[1149]: time="2024-02-09T18:40:12.258484619Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:40:12.258886 env[1149]: time="2024-02-09T18:40:12.258755581Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 9 18:40:12.259085 env[1149]: time="2024-02-09T18:40:12.259043262Z" level=error msg="Failed to pipe stdout of container \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\"" error="reading from a closed fifo" Feb 9 18:40:12.259409 env[1149]: time="2024-02-09T18:40:12.259377384Z" level=error msg="Failed to pipe stderr of container \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\"" error="reading from a closed fifo" Feb 9 18:40:12.261824 env[1149]: 
time="2024-02-09T18:40:12.261768636Z" level=error msg="StartContainer for \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:40:12.262187 kubelet[2008]: E0209 18:40:12.262076 2008 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0" Feb 9 18:40:12.262943 kubelet[2008]: E0209 18:40:12.262873 2008 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:40:12.262943 kubelet[2008]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:40:12.262943 kubelet[2008]: rm /hostbin/cilium-mount Feb 9 18:40:12.262943 kubelet[2008]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-84gz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-bzvkd_kube-system(d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:40:12.263163 kubelet[2008]: E0209 18:40:12.262920 2008 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bzvkd" podUID=d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6 Feb 9 18:40:12.419189 env[1149]: time="2024-02-09T18:40:12.419072009Z" level=info msg="StopPodSandbox for \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\"" Feb 9 18:40:12.419189 env[1149]: time="2024-02-09T18:40:12.419167889Z" level=info msg="Container to stop \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:12.430465 systemd[1]: cri-containerd-b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b.scope: Deactivated successfully. Feb 9 18:40:12.452020 env[1149]: time="2024-02-09T18:40:12.451973770Z" level=info msg="shim disconnected" id=b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b Feb 9 18:40:12.452567 env[1149]: time="2024-02-09T18:40:12.452541213Z" level=warning msg="cleaning up after shim disconnected" id=b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b namespace=k8s.io Feb 9 18:40:12.452668 env[1149]: time="2024-02-09T18:40:12.452654054Z" level=info msg="cleaning up dead shim" Feb 9 18:40:12.459881 env[1149]: time="2024-02-09T18:40:12.459843889Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4036 runtime=io.containerd.runc.v2\n" Feb 9 18:40:12.460317 env[1149]: time="2024-02-09T18:40:12.460286331Z" level=info msg="TearDown network for sandbox \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" successfully" Feb 9 18:40:12.460411 env[1149]: time="2024-02-09T18:40:12.460392972Z" level=info msg="StopPodSandbox for \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" returns successfully" Feb 9 18:40:12.551819 kubelet[2008]: I0209 18:40:12.551779 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-net\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.551819 kubelet[2008]: I0209 18:40:12.551828 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hubble-tls\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 kubelet[2008]: I0209 18:40:12.551847 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-cgroup\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 kubelet[2008]: I0209 18:40:12.551871 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-config-path\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 kubelet[2008]: I0209 18:40:12.551891 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84gz8\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-kube-api-access-84gz8\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 kubelet[2008]: I0209 18:40:12.551913 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-bpf-maps\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 
kubelet[2008]: I0209 18:40:12.551928 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hostproc\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552053 kubelet[2008]: I0209 18:40:12.551946 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-xtables-lock\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.551984 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-kernel\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.552002 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cni-path\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.552023 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-clustermesh-secrets\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.552045 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-ipsec-secrets\") pod 
\"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.552066 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-lib-modules\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552195 kubelet[2008]: I0209 18:40:12.552086 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-run\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552355 kubelet[2008]: I0209 18:40:12.552102 2008 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-etc-cni-netd\") pod \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\" (UID: \"d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6\") " Feb 9 18:40:12.552355 kubelet[2008]: I0209 18:40:12.552166 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.552355 kubelet[2008]: I0209 18:40:12.552192 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553164 kubelet[2008]: I0209 18:40:12.552474 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553164 kubelet[2008]: I0209 18:40:12.552515 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553164 kubelet[2008]: I0209 18:40:12.552547 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553164 kubelet[2008]: I0209 18:40:12.552545 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553164 kubelet[2008]: I0209 18:40:12.552652 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553386 kubelet[2008]: I0209 18:40:12.552687 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553386 kubelet[2008]: W0209 18:40:12.552689 2008 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:12.553386 kubelet[2008]: I0209 18:40:12.552775 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.553386 kubelet[2008]: I0209 18:40:12.552801 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.554513 kubelet[2008]: I0209 18:40:12.554470 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:12.555179 kubelet[2008]: I0209 18:40:12.555146 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-kube-api-access-84gz8" (OuterVolumeSpecName: "kube-api-access-84gz8") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "kube-api-access-84gz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:12.555370 kubelet[2008]: I0209 18:40:12.555346 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:12.556263 kubelet[2008]: I0209 18:40:12.556226 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:12.556545 kubelet[2008]: I0209 18:40:12.556525 2008 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" (UID: "d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:12.653132 kubelet[2008]: I0209 18:40:12.653101 2008 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-84gz8\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-kube-api-access-84gz8\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653309 kubelet[2008]: I0209 18:40:12.653296 2008 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653393 kubelet[2008]: I0209 18:40:12.653383 2008 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653453 kubelet[2008]: I0209 18:40:12.653445 2008 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653512 kubelet[2008]: I0209 18:40:12.653503 2008 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653579 kubelet[2008]: I0209 18:40:12.653569 2008 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653639 kubelet[2008]: I0209 18:40:12.653630 2008 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653697 kubelet[2008]: I0209 18:40:12.653688 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653757 kubelet[2008]: I0209 18:40:12.653748 2008 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653818 kubelet[2008]: I0209 18:40:12.653809 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653877 kubelet[2008]: I0209 18:40:12.653868 2008 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.653945 kubelet[2008]: I0209 18:40:12.653936 2008 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.654039 kubelet[2008]: I0209 18:40:12.654028 2008 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-hubble-tls\") on node \"localhost\" DevicePath \"\"" 
Feb 9 18:40:12.654110 kubelet[2008]: I0209 18:40:12.654097 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.654169 kubelet[2008]: I0209 18:40:12.654161 2008 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 18:40:12.951929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b-shm.mount: Deactivated successfully. Feb 9 18:40:12.952056 systemd[1]: var-lib-kubelet-pods-d4c63dd6\x2db2e2\x2d4059\x2d8cc5\x2de7e61efc82c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84gz8.mount: Deactivated successfully. Feb 9 18:40:12.952114 systemd[1]: var-lib-kubelet-pods-d4c63dd6\x2db2e2\x2d4059\x2d8cc5\x2de7e61efc82c6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:12.952166 systemd[1]: var-lib-kubelet-pods-d4c63dd6\x2db2e2\x2d4059\x2d8cc5\x2de7e61efc82c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:12.952227 systemd[1]: var-lib-kubelet-pods-d4c63dd6\x2db2e2\x2d4059\x2d8cc5\x2de7e61efc82c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 18:40:13.213365 kubelet[2008]: E0209 18:40:13.213336 2008 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-fd99l" podUID=520ed2c0-efc9-40a1-8c2f-69e0aeae0719 Feb 9 18:40:13.421715 kubelet[2008]: I0209 18:40:13.421687 2008 scope.go:115] "RemoveContainer" containerID="84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0" Feb 9 18:40:13.422925 env[1149]: time="2024-02-09T18:40:13.422891909Z" level=info msg="RemoveContainer for \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\"" Feb 9 18:40:13.426049 systemd[1]: Removed slice kubepods-burstable-podd4c63dd6_b2e2_4059_8cc5_e7e61efc82c6.slice. Feb 9 18:40:13.429964 env[1149]: time="2024-02-09T18:40:13.429907344Z" level=info msg="RemoveContainer for \"84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0\" returns successfully" Feb 9 18:40:13.453965 kubelet[2008]: I0209 18:40:13.453916 2008 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:13.454102 kubelet[2008]: E0209 18:40:13.453991 2008 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" containerName="mount-cgroup" Feb 9 18:40:13.454102 kubelet[2008]: I0209 18:40:13.454017 2008 memory_manager.go:346] "RemoveStaleState removing state" podUID="d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6" containerName="mount-cgroup" Feb 9 18:40:13.459510 systemd[1]: Created slice kubepods-burstable-pod15ba2a54_a8fc_4833_845a_e2341a6a867c.slice. 
Feb 9 18:40:13.558434 kubelet[2008]: I0209 18:40:13.558320 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15ba2a54-a8fc-4833-845a-e2341a6a867c-cilium-ipsec-secrets\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558434 kubelet[2008]: I0209 18:40:13.558372 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-lib-modules\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558434 kubelet[2008]: I0209 18:40:13.558393 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-cni-path\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558434 kubelet[2008]: I0209 18:40:13.558412 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-xtables-lock\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558640 kubelet[2008]: I0209 18:40:13.558475 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-hostproc\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558640 kubelet[2008]: I0209 18:40:13.558568 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-bpf-maps\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558640 kubelet[2008]: I0209 18:40:13.558622 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-etc-cni-netd\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558717 kubelet[2008]: I0209 18:40:13.558644 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15ba2a54-a8fc-4833-845a-e2341a6a867c-clustermesh-secrets\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558717 kubelet[2008]: I0209 18:40:13.558677 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15ba2a54-a8fc-4833-845a-e2341a6a867c-cilium-config-path\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558717 kubelet[2008]: I0209 18:40:13.558702 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15ba2a54-a8fc-4833-845a-e2341a6a867c-hubble-tls\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558789 kubelet[2008]: I0209 18:40:13.558721 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-cilium-run\") pod \"cilium-9tnc9\" (UID: 
\"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558789 kubelet[2008]: I0209 18:40:13.558780 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-cilium-cgroup\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558837 kubelet[2008]: I0209 18:40:13.558817 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmzh\" (UniqueName: \"kubernetes.io/projected/15ba2a54-a8fc-4833-845a-e2341a6a867c-kube-api-access-6gmzh\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558860 kubelet[2008]: I0209 18:40:13.558843 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-host-proc-sys-net\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.558885 kubelet[2008]: I0209 18:40:13.558872 2008 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15ba2a54-a8fc-4833-845a-e2341a6a867c-host-proc-sys-kernel\") pod \"cilium-9tnc9\" (UID: \"15ba2a54-a8fc-4833-845a-e2341a6a867c\") " pod="kube-system/cilium-9tnc9" Feb 9 18:40:13.762529 kubelet[2008]: E0209 18:40:13.762489 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:13.763010 env[1149]: time="2024-02-09T18:40:13.762934106Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-9tnc9,Uid:15ba2a54-a8fc-4833-845a-e2341a6a867c,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:13.803743 env[1149]: time="2024-02-09T18:40:13.803648186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:13.803743 env[1149]: time="2024-02-09T18:40:13.803689547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:13.803743 env[1149]: time="2024-02-09T18:40:13.803699747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:13.806282 env[1149]: time="2024-02-09T18:40:13.804060308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91 pid=4063 runtime=io.containerd.runc.v2 Feb 9 18:40:13.817807 systemd[1]: Started cri-containerd-ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91.scope. 
Feb 9 18:40:13.844086 env[1149]: time="2024-02-09T18:40:13.844044826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tnc9,Uid:15ba2a54-a8fc-4833-845a-e2341a6a867c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\"" Feb 9 18:40:13.844650 kubelet[2008]: E0209 18:40:13.844632 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:13.846744 env[1149]: time="2024-02-09T18:40:13.846712439Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:13.855228 env[1149]: time="2024-02-09T18:40:13.855189600Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335\"" Feb 9 18:40:13.855788 env[1149]: time="2024-02-09T18:40:13.855764043Z" level=info msg="StartContainer for \"f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335\"" Feb 9 18:40:13.868988 systemd[1]: Started cri-containerd-f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335.scope. Feb 9 18:40:13.896677 env[1149]: time="2024-02-09T18:40:13.896634765Z" level=info msg="StartContainer for \"f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335\" returns successfully" Feb 9 18:40:13.906504 systemd[1]: cri-containerd-f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335.scope: Deactivated successfully. 
Feb 9 18:40:13.930499 env[1149]: time="2024-02-09T18:40:13.930454132Z" level=info msg="shim disconnected" id=f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335 Feb 9 18:40:13.930499 env[1149]: time="2024-02-09T18:40:13.930498092Z" level=warning msg="cleaning up after shim disconnected" id=f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335 namespace=k8s.io Feb 9 18:40:13.930700 env[1149]: time="2024-02-09T18:40:13.930508092Z" level=info msg="cleaning up dead shim" Feb 9 18:40:13.936806 env[1149]: time="2024-02-09T18:40:13.936753963Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4146 runtime=io.containerd.runc.v2\n" Feb 9 18:40:14.215103 env[1149]: time="2024-02-09T18:40:14.215053658Z" level=info msg="StopPodSandbox for \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\"" Feb 9 18:40:14.215233 env[1149]: time="2024-02-09T18:40:14.215176698Z" level=info msg="TearDown network for sandbox \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" successfully" Feb 9 18:40:14.215233 env[1149]: time="2024-02-09T18:40:14.215211698Z" level=info msg="StopPodSandbox for \"b472998756a59c969acddfe44f6afc20a46dc47f3e4efebab7af754153a4809b\" returns successfully" Feb 9 18:40:14.215827 kubelet[2008]: I0209 18:40:14.215802 2008 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6 path="/var/lib/kubelet/pods/d4c63dd6-b2e2-4059-8cc5-e7e61efc82c6/volumes" Feb 9 18:40:14.426448 kubelet[2008]: E0209 18:40:14.426421 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:14.429313 env[1149]: time="2024-02-09T18:40:14.429227877Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:40:14.448239 env[1149]: time="2024-02-09T18:40:14.448183210Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630\"" Feb 9 18:40:14.448679 env[1149]: time="2024-02-09T18:40:14.448651813Z" level=info msg="StartContainer for \"269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630\"" Feb 9 18:40:14.465870 systemd[1]: Started cri-containerd-269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630.scope. Feb 9 18:40:14.495745 env[1149]: time="2024-02-09T18:40:14.495649925Z" level=info msg="StartContainer for \"269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630\" returns successfully" Feb 9 18:40:14.500622 systemd[1]: cri-containerd-269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630.scope: Deactivated successfully. Feb 9 18:40:14.520256 env[1149]: time="2024-02-09T18:40:14.520193486Z" level=info msg="shim disconnected" id=269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630 Feb 9 18:40:14.520256 env[1149]: time="2024-02-09T18:40:14.520244327Z" level=warning msg="cleaning up after shim disconnected" id=269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630 namespace=k8s.io Feb 9 18:40:14.520256 env[1149]: time="2024-02-09T18:40:14.520259127Z" level=info msg="cleaning up dead shim" Feb 9 18:40:14.527227 env[1149]: time="2024-02-09T18:40:14.527143481Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4208 runtime=io.containerd.runc.v2\n" Feb 9 18:40:14.952114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630-rootfs.mount: Deactivated successfully. 
Feb 9 18:40:15.213618 kubelet[2008]: E0209 18:40:15.213584 2008 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-fd99l" podUID=520ed2c0-efc9-40a1-8c2f-69e0aeae0719 Feb 9 18:40:15.357074 kubelet[2008]: W0209 18:40:15.357024 2008 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4c63dd6_b2e2_4059_8cc5_e7e61efc82c6.slice/cri-containerd-84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0.scope WatchSource:0}: container "84660fe1c89b017f92c48a1c42f24931846d52a1aedd1c23946b68784794e3b0" in namespace "k8s.io": not found Feb 9 18:40:15.431138 kubelet[2008]: E0209 18:40:15.430146 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:15.433208 env[1149]: time="2024-02-09T18:40:15.432813884Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:40:15.449469 env[1149]: time="2024-02-09T18:40:15.449422407Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34\"" Feb 9 18:40:15.449934 env[1149]: time="2024-02-09T18:40:15.449906569Z" level=info msg="StartContainer for \"a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34\"" Feb 9 18:40:15.477575 systemd[1]: Started cri-containerd-a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34.scope. 
Feb 9 18:40:15.507724 env[1149]: time="2024-02-09T18:40:15.507678936Z" level=info msg="StartContainer for \"a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34\" returns successfully"
Feb 9 18:40:15.508730 systemd[1]: cri-containerd-a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34.scope: Deactivated successfully.
Feb 9 18:40:15.529177 env[1149]: time="2024-02-09T18:40:15.529130362Z" level=info msg="shim disconnected" id=a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34
Feb 9 18:40:15.529177 env[1149]: time="2024-02-09T18:40:15.529171802Z" level=warning msg="cleaning up after shim disconnected" id=a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34 namespace=k8s.io
Feb 9 18:40:15.529177 env[1149]: time="2024-02-09T18:40:15.529181322Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:15.535948 env[1149]: time="2024-02-09T18:40:15.535912955Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4267 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:15.952216 systemd[1]: run-containerd-runc-k8s.io-a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34-runc.Bkzugz.mount: Deactivated successfully.
Feb 9 18:40:15.952329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34-rootfs.mount: Deactivated successfully.
Feb 9 18:40:16.244243 kubelet[2008]: E0209 18:40:16.244193 2008 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:40:16.434118 kubelet[2008]: E0209 18:40:16.434082 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:16.436208 env[1149]: time="2024-02-09T18:40:16.436165625Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:40:16.452164 env[1149]: time="2024-02-09T18:40:16.452104624Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec\""
Feb 9 18:40:16.452927 env[1149]: time="2024-02-09T18:40:16.452901828Z" level=info msg="StartContainer for \"00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec\""
Feb 9 18:40:16.472182 systemd[1]: Started cri-containerd-00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec.scope.
Feb 9 18:40:16.501883 systemd[1]: cri-containerd-00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec.scope: Deactivated successfully.
Feb 9 18:40:16.504847 env[1149]: time="2024-02-09T18:40:16.504805006Z" level=info msg="StartContainer for \"00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec\" returns successfully"
Feb 9 18:40:16.530626 env[1149]: time="2024-02-09T18:40:16.530532614Z" level=info msg="shim disconnected" id=00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec
Feb 9 18:40:16.530878 env[1149]: time="2024-02-09T18:40:16.530846935Z" level=warning msg="cleaning up after shim disconnected" id=00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec namespace=k8s.io
Feb 9 18:40:16.530987 env[1149]: time="2024-02-09T18:40:16.530947736Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:16.538072 env[1149]: time="2024-02-09T18:40:16.538035091Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4320 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:16.952276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec-rootfs.mount: Deactivated successfully.
Feb 9 18:40:17.213500 kubelet[2008]: E0209 18:40:17.213463 2008 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-fd99l" podUID=520ed2c0-efc9-40a1-8c2f-69e0aeae0719
Feb 9 18:40:17.437794 kubelet[2008]: E0209 18:40:17.437748 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:17.440039 env[1149]: time="2024-02-09T18:40:17.439989300Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:40:17.453629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113470594.mount: Deactivated successfully.
Feb 9 18:40:17.542588 env[1149]: time="2024-02-09T18:40:17.542459371Z" level=info msg="CreateContainer within sandbox \"ec19bd68e542aea0164bbf55657001cbe87f624c34e5f458d0cea6623ff79b91\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3\""
Feb 9 18:40:17.543321 env[1149]: time="2024-02-09T18:40:17.543288455Z" level=info msg="StartContainer for \"ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3\""
Feb 9 18:40:17.557155 systemd[1]: Started cri-containerd-ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3.scope.
Feb 9 18:40:17.594877 env[1149]: time="2024-02-09T18:40:17.594827552Z" level=info msg="StartContainer for \"ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3\" returns successfully"
Feb 9 18:40:17.833988 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 18:40:18.190783 kubelet[2008]: I0209 18:40:18.190742 2008 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:40:18.190649844 +0000 UTC m=+102.121675539 LastTransitionTime:2024-02-09 18:40:18.190649844 +0000 UTC m=+102.121675539 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 18:40:18.441972 kubelet[2008]: E0209 18:40:18.441870 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:18.454339 kubelet[2008]: I0209 18:40:18.454306 2008 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9tnc9" podStartSLOduration=5.454270441 pod.CreationTimestamp="2024-02-09 18:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:18.453475637 +0000 UTC m=+102.384501372" watchObservedRunningTime="2024-02-09 18:40:18.454270441 +0000 UTC m=+102.385296136"
Feb 9 18:40:18.465685 kubelet[2008]: W0209 18:40:18.465649 2008 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ba2a54_a8fc_4833_845a_e2341a6a867c.slice/cri-containerd-f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335.scope WatchSource:0}: task f6a1d03ad880b6b9229fe923b5d823e47817e46d1514e116a25e6fd9901df335 not found: not found
Feb 9 18:40:19.213180 kubelet[2008]: E0209 18:40:19.213135 2008 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-fd99l" podUID=520ed2c0-efc9-40a1-8c2f-69e0aeae0719
Feb 9 18:40:19.443590 kubelet[2008]: E0209 18:40:19.443550 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:20.301853 systemd[1]: run-containerd-runc-k8s.io-ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3-runc.KRhnSv.mount: Deactivated successfully.
Feb 9 18:40:20.445219 kubelet[2008]: E0209 18:40:20.445193 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:20.452394 systemd-networkd[1057]: lxc_health: Link UP
Feb 9 18:40:20.454516 systemd-networkd[1057]: lxc_health: Gained carrier
Feb 9 18:40:20.455042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:40:21.213752 kubelet[2008]: E0209 18:40:21.213713 2008 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-fd99l" podUID=520ed2c0-efc9-40a1-8c2f-69e0aeae0719
Feb 9 18:40:21.574058 kubelet[2008]: W0209 18:40:21.573926 2008 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ba2a54_a8fc_4833_845a_e2341a6a867c.slice/cri-containerd-269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630.scope WatchSource:0}: task 269f6234d9fe0c46048e6267e7de3f4dc98957225ab1c59b3e8cc0a491db9630 not found: not found
Feb 9 18:40:21.578104 systemd-networkd[1057]: lxc_health: Gained IPv6LL
Feb 9 18:40:21.765217 kubelet[2008]: E0209 18:40:21.765175 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:22.447945 kubelet[2008]: E0209 18:40:22.447915 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:23.213480 kubelet[2008]: E0209 18:40:23.213449 2008 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:40:24.568459 systemd[1]: run-containerd-runc-k8s.io-ad0b237486bfd6bfc5de172eecf1cd7eadd7349765bc77e113ee7c92d3fff2d3-runc.pLWWPD.mount: Deactivated successfully.
Feb 9 18:40:24.680795 kubelet[2008]: W0209 18:40:24.680751 2008 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ba2a54_a8fc_4833_845a_e2341a6a867c.slice/cri-containerd-a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34.scope WatchSource:0}: task a2aa7a03e5b6066093be545da393d4fdc5325b012430addc5959e04eefecbd34 not found: not found
Feb 9 18:40:26.747901 sshd[3930]: pam_unix(sshd:session): session closed for user core
Feb 9 18:40:26.750785 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:35676.service: Deactivated successfully.
Feb 9 18:40:26.751551 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 18:40:26.752092 systemd-logind[1138]: Session 25 logged out. Waiting for processes to exit.
Feb 9 18:40:26.752684 systemd-logind[1138]: Removed session 25.
Feb 9 18:40:27.788053 kubelet[2008]: W0209 18:40:27.787997 2008 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ba2a54_a8fc_4833_845a_e2341a6a867c.slice/cri-containerd-00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec.scope WatchSource:0}: task 00796a2a257bb958710601003da09d01484ed5b60cfc0d1375f495dbe3b9ebec not found: not found