Feb 9 10:04:57.720607 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 10:04:57.720627 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 10:04:57.720635 kernel: efi: EFI v2.70 by EDK II
Feb 9 10:04:57.720641 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 10:04:57.720646 kernel: random: crng init done
Feb 9 10:04:57.720651 kernel: ACPI: Early table checksum verification disabled
Feb 9 10:04:57.720657 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 10:04:57.720664 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 10:04:57.720670 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720675 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720680 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720696 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720701 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720707 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720715 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720721 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720726 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:04:57.720732 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 10:04:57.720738 kernel: NUMA: Failed to initialise from firmware
Feb 9 10:04:57.720743 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:04:57.720749 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 10:04:57.720754 kernel: Zone ranges:
Feb 9 10:04:57.720760 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:04:57.720767 kernel: DMA32 empty
Feb 9 10:04:57.720773 kernel: Normal empty
Feb 9 10:04:57.720778 kernel: Movable zone start for each node
Feb 9 10:04:57.720784 kernel: Early memory node ranges
Feb 9 10:04:57.720789 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 10:04:57.720795 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 10:04:57.720801 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 10:04:57.720806 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 10:04:57.720829 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 10:04:57.720835 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 10:04:57.720842 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 10:04:57.720848 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:04:57.720855 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 10:04:57.720861 kernel: psci: probing for conduit method from ACPI.
Feb 9 10:04:57.720866 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 10:04:57.720877 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 10:04:57.720883 kernel: psci: Trusted OS migration not required
Feb 9 10:04:57.720892 kernel: psci: SMC Calling Convention v1.1
Feb 9 10:04:57.720898 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 10:04:57.720905 kernel: ACPI: SRAT not present
Feb 9 10:04:57.720926 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 10:04:57.720932 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 10:04:57.720938 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 10:04:57.720944 kernel: Detected PIPT I-cache on CPU0
Feb 9 10:04:57.720951 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 10:04:57.720957 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 10:04:57.720963 kernel: CPU features: detected: Spectre-v4
Feb 9 10:04:57.720969 kernel: CPU features: detected: Spectre-BHB
Feb 9 10:04:57.720976 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 10:04:57.720982 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 10:04:57.720988 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 10:04:57.720994 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 10:04:57.721000 kernel: Policy zone: DMA
Feb 9 10:04:57.721007 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:04:57.721014 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 10:04:57.721020 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 10:04:57.721026 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 10:04:57.721032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 10:04:57.721038 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 10:04:57.721046 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 10:04:57.721052 kernel: trace event string verifier disabled
Feb 9 10:04:57.721058 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 10:04:57.721064 kernel: rcu: RCU event tracing is enabled.
Feb 9 10:04:57.721070 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 10:04:57.721076 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 10:04:57.721082 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 10:04:57.721088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 10:04:57.721094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 10:04:57.721100 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 10:04:57.721106 kernel: GICv3: 256 SPIs implemented
Feb 9 10:04:57.721113 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 10:04:57.721119 kernel: GICv3: Distributor has no Range Selector support
Feb 9 10:04:57.721125 kernel: Root IRQ handler: gic_handle_irq
Feb 9 10:04:57.721132 kernel: GICv3: 16 PPIs implemented
Feb 9 10:04:57.721139 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 10:04:57.721145 kernel: ACPI: SRAT not present
Feb 9 10:04:57.721151 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 10:04:57.721157 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 10:04:57.721163 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 10:04:57.721169 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 10:04:57.721175 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 10:04:57.721181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:04:57.721188 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 10:04:57.721194 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 10:04:57.721200 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 10:04:57.721206 kernel: arm-pv: using stolen time PV
Feb 9 10:04:57.721213 kernel: Console: colour dummy device 80x25
Feb 9 10:04:57.721219 kernel: ACPI: Core revision 20210730
Feb 9 10:04:57.721225 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 10:04:57.721231 kernel: pid_max: default: 32768 minimum: 301
Feb 9 10:04:57.721237 kernel: LSM: Security Framework initializing
Feb 9 10:04:57.721243 kernel: SELinux: Initializing.
Feb 9 10:04:57.721251 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:04:57.721257 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:04:57.721263 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 10:04:57.721269 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 10:04:57.721275 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 10:04:57.721281 kernel: Remapping and enabling EFI services.
Feb 9 10:04:57.721287 kernel: smp: Bringing up secondary CPUs ...
Feb 9 10:04:57.721293 kernel: Detected PIPT I-cache on CPU1
Feb 9 10:04:57.721300 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 10:04:57.721307 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 10:04:57.721314 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:04:57.721320 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 10:04:57.721327 kernel: Detected PIPT I-cache on CPU2
Feb 9 10:04:57.721333 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 10:04:57.721339 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 10:04:57.721345 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:04:57.721351 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 10:04:57.721357 kernel: Detected PIPT I-cache on CPU3
Feb 9 10:04:57.721364 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 10:04:57.721371 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 10:04:57.721377 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:04:57.721383 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 10:04:57.721389 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 10:04:57.721400 kernel: SMP: Total of 4 processors activated.
Feb 9 10:04:57.721407 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 10:04:57.721414 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 10:04:57.721420 kernel: CPU features: detected: Common not Private translations
Feb 9 10:04:57.721427 kernel: CPU features: detected: CRC32 instructions
Feb 9 10:04:57.721433 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 10:04:57.721439 kernel: CPU features: detected: LSE atomic instructions
Feb 9 10:04:57.721446 kernel: CPU features: detected: Privileged Access Never
Feb 9 10:04:57.721454 kernel: CPU features: detected: RAS Extension Support
Feb 9 10:04:57.721460 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 10:04:57.721467 kernel: CPU: All CPU(s) started at EL1
Feb 9 10:04:57.721473 kernel: alternatives: patching kernel code
Feb 9 10:04:57.721481 kernel: devtmpfs: initialized
Feb 9 10:04:57.721487 kernel: KASLR enabled
Feb 9 10:04:57.721493 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 10:04:57.721500 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 10:04:57.721506 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 10:04:57.721513 kernel: SMBIOS 3.0.0 present.
Feb 9 10:04:57.721519 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 10:04:57.721526 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 10:04:57.721532 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 10:04:57.721539 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 10:04:57.721547 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 10:04:57.721553 kernel: audit: initializing netlink subsys (disabled)
Feb 9 10:04:57.721560 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 10:04:57.721566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 10:04:57.721572 kernel: cpuidle: using governor menu
Feb 9 10:04:57.721579 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 10:04:57.721585 kernel: ASID allocator initialised with 32768 entries
Feb 9 10:04:57.721592 kernel: ACPI: bus type PCI registered
Feb 9 10:04:57.721601 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 10:04:57.721609 kernel: Serial: AMBA PL011 UART driver
Feb 9 10:04:57.721615 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 10:04:57.721622 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 10:04:57.721628 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 10:04:57.721635 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 10:04:57.721642 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 10:04:57.721648 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 10:04:57.721655 kernel: ACPI: Added _OSI(Module Device)
Feb 9 10:04:57.721661 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 10:04:57.721669 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 10:04:57.721675 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 10:04:57.721682 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 10:04:57.721696 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 10:04:57.721703 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 10:04:57.721709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 10:04:57.721716 kernel: ACPI: Interpreter enabled
Feb 9 10:04:57.721722 kernel: ACPI: Using GIC for interrupt routing
Feb 9 10:04:57.721728 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 10:04:57.721737 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 10:04:57.721743 kernel: printk: console [ttyAMA0] enabled
Feb 9 10:04:57.721750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 10:04:57.721885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 10:04:57.721953 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 10:04:57.722012 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 10:04:57.722070 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 10:04:57.722131 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 10:04:57.722139 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 10:04:57.722146 kernel: PCI host bridge to bus 0000:00
Feb 9 10:04:57.722210 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 10:04:57.722267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 10:04:57.722326 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 10:04:57.722380 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 10:04:57.722454 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 10:04:57.722524 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 10:04:57.722586 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 10:04:57.722650 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 10:04:57.722724 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:04:57.722786 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:04:57.722845 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 10:04:57.722916 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 10:04:57.722972 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 10:04:57.723025 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 10:04:57.723076 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 10:04:57.723085 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 10:04:57.723091 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 10:04:57.723098 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 10:04:57.723107 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 10:04:57.723113 kernel: iommu: Default domain type: Translated
Feb 9 10:04:57.723120 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 10:04:57.723126 kernel: vgaarb: loaded
Feb 9 10:04:57.723132 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 10:04:57.723139 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 10:04:57.723146 kernel: PTP clock support registered
Feb 9 10:04:57.723152 kernel: Registered efivars operations
Feb 9 10:04:57.723158 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 10:04:57.723165 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 10:04:57.723173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 10:04:57.723179 kernel: pnp: PnP ACPI init
Feb 9 10:04:57.723245 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 10:04:57.723254 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 10:04:57.723261 kernel: NET: Registered PF_INET protocol family
Feb 9 10:04:57.723267 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 10:04:57.723274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 10:04:57.723280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 10:04:57.723288 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 10:04:57.723295 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 10:04:57.723303 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 10:04:57.723311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:04:57.723347 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:04:57.723357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 10:04:57.723363 kernel: PCI: CLS 0 bytes, default 64
Feb 9 10:04:57.723370 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 10:04:57.723378 kernel: kvm [1]: HYP mode not available
Feb 9 10:04:57.723385 kernel: Initialise system trusted keyrings
Feb 9 10:04:57.723391 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 10:04:57.723398 kernel: Key type asymmetric registered
Feb 9 10:04:57.723404 kernel: Asymmetric key parser 'x509' registered
Feb 9 10:04:57.723411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 10:04:57.723417 kernel: io scheduler mq-deadline registered
Feb 9 10:04:57.723424 kernel: io scheduler kyber registered
Feb 9 10:04:57.723430 kernel: io scheduler bfq registered
Feb 9 10:04:57.723437 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 10:04:57.723444 kernel: ACPI: button: Power Button [PWRB]
Feb 9 10:04:57.723451 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 10:04:57.723591 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 10:04:57.723602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 10:04:57.723608 kernel: thunder_xcv, ver 1.0
Feb 9 10:04:57.723614 kernel: thunder_bgx, ver 1.0
Feb 9 10:04:57.723621 kernel: nicpf, ver 1.0
Feb 9 10:04:57.723627 kernel: nicvf, ver 1.0
Feb 9 10:04:57.723720 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 10:04:57.723786 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T10:04:57 UTC (1707473097)
Feb 9 10:04:57.723795 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 10:04:57.723801 kernel: NET: Registered PF_INET6 protocol family
Feb 9 10:04:57.723808 kernel: Segment Routing with IPv6
Feb 9 10:04:57.723815 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 10:04:57.723821 kernel: NET: Registered PF_PACKET protocol family
Feb 9 10:04:57.723828 kernel: Key type dns_resolver registered
Feb 9 10:04:57.723834 kernel: registered taskstats version 1
Feb 9 10:04:57.723842 kernel: Loading compiled-in X.509 certificates
Feb 9 10:04:57.723849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 10:04:57.723855 kernel: Key type .fscrypt registered
Feb 9 10:04:57.723861 kernel: Key type fscrypt-provisioning registered
Feb 9 10:04:57.723876 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 10:04:57.723883 kernel: ima: Allocated hash algorithm: sha1
Feb 9 10:04:57.723890 kernel: ima: No architecture policies found
Feb 9 10:04:57.723896 kernel: Freeing unused kernel memory: 34688K
Feb 9 10:04:57.723902 kernel: Run /init as init process
Feb 9 10:04:57.723910 kernel: with arguments:
Feb 9 10:04:57.723917 kernel: /init
Feb 9 10:04:57.723923 kernel: with environment:
Feb 9 10:04:57.723929 kernel: HOME=/
Feb 9 10:04:57.723935 kernel: TERM=linux
Feb 9 10:04:57.723942 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 10:04:57.723950 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:04:57.723959 systemd[1]: Detected virtualization kvm.
Feb 9 10:04:57.723967 systemd[1]: Detected architecture arm64.
Feb 9 10:04:57.723974 systemd[1]: Running in initrd.
Feb 9 10:04:57.723981 systemd[1]: No hostname configured, using default hostname.
Feb 9 10:04:57.723987 systemd[1]: Hostname set to <localhost>.
Feb 9 10:04:57.723995 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 10:04:57.724002 systemd[1]: Queued start job for default target initrd.target.
Feb 9 10:04:57.724009 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:04:57.724015 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:04:57.724023 systemd[1]: Reached target paths.target.
Feb 9 10:04:57.724030 systemd[1]: Reached target slices.target.
Feb 9 10:04:57.724037 systemd[1]: Reached target swap.target.
Feb 9 10:04:57.724044 systemd[1]: Reached target timers.target.
Feb 9 10:04:57.724051 systemd[1]: Listening on iscsid.socket.
Feb 9 10:04:57.724058 systemd[1]: Listening on iscsiuio.socket.
Feb 9 10:04:57.724065 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 10:04:57.724073 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 10:04:57.724081 systemd[1]: Listening on systemd-journald.socket.
Feb 9 10:04:57.724088 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:04:57.724098 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:04:57.724105 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:04:57.724112 systemd[1]: Reached target sockets.target.
Feb 9 10:04:57.724122 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:04:57.724130 systemd[1]: Finished network-cleanup.service.
Feb 9 10:04:57.724137 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 10:04:57.724145 systemd[1]: Starting systemd-journald.service...
Feb 9 10:04:57.724153 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:04:57.724162 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:04:57.724169 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 10:04:57.724177 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:04:57.724184 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 10:04:57.724191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:04:57.724198 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 10:04:57.724205 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 10:04:57.724213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:04:57.724224 systemd-journald[289]: Journal started
Feb 9 10:04:57.724266 systemd-journald[289]: Runtime Journal (/run/log/journal/b71e2b0aae804549a6e4b4c31d51f601) is 6.0M, max 48.7M, 42.6M free.
Feb 9 10:04:57.716286 systemd-modules-load[290]: Inserted module 'overlay'
Feb 9 10:04:57.727463 kernel: audit: type=1130 audit(1707473097.724:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.728029 systemd[1]: Started systemd-journald.service.
Feb 9 10:04:57.731985 kernel: audit: type=1130 audit(1707473097.728:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.740703 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 10:04:57.742268 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 9 10:04:57.742970 kernel: Bridge firewalling registered
Feb 9 10:04:57.745424 systemd-resolved[291]: Positive Trust Anchors:
Feb 9 10:04:57.745441 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:04:57.745470 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:04:57.748001 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 10:04:57.754845 kernel: SCSI subsystem initialized
Feb 9 10:04:57.754861 kernel: audit: type=1130 audit(1707473097.751:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.755836 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 9 10:04:57.759365 systemd[1]: Starting dracut-cmdline.service...
Feb 9 10:04:57.762411 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 10:04:57.762429 kernel: device-mapper: uevent: version 1.0.3
Feb 9 10:04:57.762438 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 10:04:57.761584 systemd[1]: Started systemd-resolved.service.
Feb 9 10:04:57.765866 kernel: audit: type=1130 audit(1707473097.762:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.764530 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:04:57.765818 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 9 10:04:57.766551 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:04:57.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.769810 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:04:57.771769 kernel: audit: type=1130 audit(1707473097.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.771807 dracut-cmdline[309]: dracut-dracut-053
Feb 9 10:04:57.775077 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:04:57.778347 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:04:57.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.781699 kernel: audit: type=1130 audit(1707473097.779:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.832708 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 10:04:57.840711 kernel: iscsi: registered transport (tcp)
Feb 9 10:04:57.853712 kernel: iscsi: registered transport (qla4xxx)
Feb 9 10:04:57.853732 kernel: QLogic iSCSI HBA Driver
Feb 9 10:04:57.886643 systemd[1]: Finished dracut-cmdline.service.
Feb 9 10:04:57.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.888339 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 10:04:57.890879 kernel: audit: type=1130 audit(1707473097.887:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:57.931711 kernel: raid6: neonx8 gen() 13776 MB/s
Feb 9 10:04:57.948697 kernel: raid6: neonx8 xor() 10805 MB/s
Feb 9 10:04:57.965705 kernel: raid6: neonx4 gen() 13492 MB/s
Feb 9 10:04:57.982695 kernel: raid6: neonx4 xor() 11232 MB/s
Feb 9 10:04:57.999705 kernel: raid6: neonx2 gen() 12897 MB/s
Feb 9 10:04:58.016710 kernel: raid6: neonx2 xor() 10221 MB/s
Feb 9 10:04:58.033701 kernel: raid6: neonx1 gen() 10442 MB/s
Feb 9 10:04:58.050707 kernel: raid6: neonx1 xor() 8774 MB/s
Feb 9 10:04:58.067703 kernel: raid6: int64x8 gen() 6270 MB/s
Feb 9 10:04:58.084703 kernel: raid6: int64x8 xor() 3535 MB/s
Feb 9 10:04:58.101708 kernel: raid6: int64x4 gen() 7170 MB/s
Feb 9 10:04:58.118710 kernel: raid6: int64x4 xor() 3837 MB/s
Feb 9 10:04:58.135702 kernel: raid6: int64x2 gen() 6137 MB/s
Feb 9 10:04:58.152700 kernel: raid6: int64x2 xor() 3311 MB/s
Feb 9 10:04:58.169707 kernel: raid6: int64x1 gen() 5033 MB/s
Feb 9 10:04:58.186889 kernel: raid6: int64x1 xor() 2638 MB/s
Feb 9 10:04:58.186910 kernel: raid6: using algorithm neonx8 gen() 13776 MB/s
Feb 9 10:04:58.186927 kernel: raid6: .... xor() 10805 MB/s, rmw enabled
Feb 9 10:04:58.186942 kernel: raid6: using neon recovery algorithm
Feb 9 10:04:58.197901 kernel: xor: measuring software checksum speed
Feb 9 10:04:58.197927 kernel: 8regs : 17275 MB/sec
Feb 9 10:04:58.198722 kernel: 32regs : 20755 MB/sec
Feb 9 10:04:58.199878 kernel: arm64_neon : 28035 MB/sec
Feb 9 10:04:58.199888 kernel: xor: using function: arm64_neon (28035 MB/sec)
Feb 9 10:04:58.254706 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 10:04:58.264279 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 10:04:58.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:58.265000 audit: BPF prog-id=7 op=LOAD
Feb 9 10:04:58.268423 kernel: audit: type=1130 audit(1707473098.264:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:58.269724 kernel: audit: type=1334 audit(1707473098.265:10): prog-id=7 op=LOAD
Feb 9 10:04:58.267000 audit: BPF prog-id=8 op=LOAD
Feb 9 10:04:58.267909 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:04:58.282979 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 9 10:04:58.286242 systemd[1]: Started systemd-udevd.service.
Feb 9 10:04:58.287681 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 10:04:58.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:58.299304 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Feb 9 10:04:58.325539 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 10:04:58.327083 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:04:58.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:58.360100 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:04:58.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:58.392947 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 10:04:58.394921 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 10:04:58.394945 kernel: GPT:9289727 != 19775487
Feb 9 10:04:58.394954 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 10:04:58.395955 kernel: GPT:9289727 != 19775487
Feb 9 10:04:58.396997 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 10:04:58.397013 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:04:58.406721 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (550)
Feb 9 10:04:58.409948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 10:04:58.410977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 10:04:58.415094 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 10:04:58.418406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:04:58.423541 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 10:04:58.425366 systemd[1]: Starting disk-uuid.service...
Feb 9 10:04:58.431227 disk-uuid[564]: Primary Header is updated.
Feb 9 10:04:58.431227 disk-uuid[564]: Secondary Entries is updated.
Feb 9 10:04:58.431227 disk-uuid[564]: Secondary Header is updated.
Feb 9 10:04:58.435708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:04:59.444716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:04:59.444964 disk-uuid[565]: The operation has completed successfully.
Feb 9 10:04:59.468443 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 10:04:59.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.468533 systemd[1]: Finished disk-uuid.service.
Feb 9 10:04:59.472335 systemd[1]: Starting verity-setup.service...
Feb 9 10:04:59.486718 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 10:04:59.507388 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 10:04:59.509677 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 10:04:59.511380 systemd[1]: Finished verity-setup.service.
Feb 9 10:04:59.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.557705 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 10:04:59.558158 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 10:04:59.559014 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 10:04:59.559731 systemd[1]: Starting ignition-setup.service...
Feb 9 10:04:59.562082 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 10:04:59.567817 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:04:59.567854 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:04:59.567877 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:04:59.577106 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 10:04:59.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.583930 systemd[1]: Finished ignition-setup.service.
Feb 9 10:04:59.585503 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 10:04:59.656264 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 10:04:59.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.657000 audit: BPF prog-id=9 op=LOAD
Feb 9 10:04:59.658504 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:04:59.691886 systemd-networkd[743]: lo: Link UP
Feb 9 10:04:59.691894 systemd-networkd[743]: lo: Gained carrier
Feb 9 10:04:59.692317 systemd-networkd[743]: Enumeration completed
Feb 9 10:04:59.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.692418 systemd[1]: Started systemd-networkd.service.
Feb 9 10:04:59.692500 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:04:59.693475 systemd[1]: Reached target network.target.
Feb 9 10:04:59.693772 systemd-networkd[743]: eth0: Link UP
Feb 9 10:04:59.693776 systemd-networkd[743]: eth0: Gained carrier
Feb 9 10:04:59.695398 systemd[1]: Starting iscsiuio.service...
Feb 9 10:04:59.708549 systemd[1]: Started iscsiuio.service.
Feb 9 10:04:59.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.710138 systemd[1]: Starting iscsid.service...
Feb 9 10:04:59.710798 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:04:59.713724 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:04:59.713724 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 10:04:59.713724 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 10:04:59.713724 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 10:04:59.713724 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:04:59.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.714188 ignition[653]: Ignition 2.14.0
Feb 9 10:04:59.729705 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 10:04:59.719726 systemd[1]: Started iscsid.service.
Feb 9 10:04:59.714195 ignition[653]: Stage: fetch-offline
Feb 9 10:04:59.722272 systemd[1]: Starting dracut-initqueue.service...
Feb 9 10:04:59.714235 ignition[653]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:04:59.714243 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:04:59.714414 ignition[653]: parsed url from cmdline: ""
Feb 9 10:04:59.714417 ignition[653]: no config URL provided
Feb 9 10:04:59.714422 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:04:59.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.734976 systemd[1]: Finished dracut-initqueue.service.
Feb 9 10:04:59.714429 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:04:59.736453 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 10:04:59.714447 ignition[653]: op(1): [started] loading QEMU firmware config module
Feb 9 10:04:59.737342 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:04:59.714452 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 10:04:59.739238 systemd[1]: Reached target remote-fs.target.
Feb 9 10:04:59.723106 ignition[653]: op(1): [finished] loading QEMU firmware config module
Feb 9 10:04:59.741400 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 10:04:59.748740 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 10:04:59.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.803461 ignition[653]: parsing config with SHA512: dd2ee389e3275f3633e1fc73fc094e2214ac1357ca65430ed4fb03b1362847977f7f781e3ed18a096ca92b7eda701e2444a1fa70a0d592174becda5a8e043c83
Feb 9 10:04:59.844671 unknown[653]: fetched base config from "system"
Feb 9 10:04:59.844682 unknown[653]: fetched user config from "qemu"
Feb 9 10:04:59.845322 ignition[653]: fetch-offline: fetch-offline passed
Feb 9 10:04:59.846510 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 10:04:59.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.845497 ignition[653]: Ignition finished successfully
Feb 9 10:04:59.847737 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 10:04:59.848415 systemd[1]: Starting ignition-kargs.service...
Feb 9 10:04:59.856803 ignition[764]: Ignition 2.14.0
Feb 9 10:04:59.856812 ignition[764]: Stage: kargs
Feb 9 10:04:59.856908 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:04:59.856918 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:04:59.858041 ignition[764]: kargs: kargs passed
Feb 9 10:04:59.860048 systemd[1]: Finished ignition-kargs.service.
Feb 9 10:04:59.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.858084 ignition[764]: Ignition finished successfully
Feb 9 10:04:59.861678 systemd[1]: Starting ignition-disks.service...
Feb 9 10:04:59.867870 ignition[770]: Ignition 2.14.0
Feb 9 10:04:59.867880 ignition[770]: Stage: disks
Feb 9 10:04:59.867960 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:04:59.867969 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:04:59.869152 ignition[770]: disks: disks passed
Feb 9 10:04:59.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.870402 systemd[1]: Finished ignition-disks.service.
Feb 9 10:04:59.869194 ignition[770]: Ignition finished successfully
Feb 9 10:04:59.871443 systemd[1]: Reached target initrd-root-device.target.
Feb 9 10:04:59.872578 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:04:59.873855 systemd[1]: Reached target local-fs.target.
Feb 9 10:04:59.875102 systemd[1]: Reached target sysinit.target.
Feb 9 10:04:59.876358 systemd[1]: Reached target basic.target.
Feb 9 10:04:59.878329 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 10:04:59.888624 systemd-fsck[778]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 10:04:59.894347 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 10:04:59.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.896711 systemd[1]: Mounting sysroot.mount...
Feb 9 10:04:59.902697 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 10:04:59.903017 systemd[1]: Mounted sysroot.mount.
Feb 9 10:04:59.903742 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 10:04:59.905768 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 10:04:59.906589 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 10:04:59.906625 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 10:04:59.906648 systemd[1]: Reached target ignition-diskful.target.
Feb 9 10:04:59.908393 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 10:04:59.910136 systemd[1]: Starting initrd-setup-root.service...
Feb 9 10:04:59.914080 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 10:04:59.917837 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory
Feb 9 10:04:59.921666 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 10:04:59.925254 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 10:04:59.951113 systemd[1]: Finished initrd-setup-root.service.
Feb 9 10:04:59.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.952526 systemd[1]: Starting ignition-mount.service...
Feb 9 10:04:59.953786 systemd[1]: Starting sysroot-boot.service...
Feb 9 10:04:59.957191 bash[829]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 10:04:59.964352 ignition[830]: INFO : Ignition 2.14.0
Feb 9 10:04:59.964352 ignition[830]: INFO : Stage: mount
Feb 9 10:04:59.965531 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:04:59.965531 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:04:59.965531 ignition[830]: INFO : mount: mount passed
Feb 9 10:04:59.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:04:59.969160 ignition[830]: INFO : Ignition finished successfully
Feb 9 10:04:59.967886 systemd[1]: Finished ignition-mount.service.
Feb 9 10:04:59.972673 systemd[1]: Finished sysroot-boot.service.
Feb 9 10:04:59.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:00.518243 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:05:00.523697 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839)
Feb 9 10:05:00.523982 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:05:00.524993 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:05:00.525009 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:05:00.528036 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:05:00.529554 systemd[1]: Starting ignition-files.service...
Feb 9 10:05:00.543453 ignition[859]: INFO : Ignition 2.14.0
Feb 9 10:05:00.543453 ignition[859]: INFO : Stage: files
Feb 9 10:05:00.544629 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:00.544629 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:00.544629 ignition[859]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 10:05:00.548099 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 10:05:00.548099 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 10:05:00.553257 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 10:05:00.554244 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 10:05:00.555347 unknown[859]: wrote ssh authorized keys file for user: core
Feb 9 10:05:00.556223 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 10:05:00.556223 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 10:05:00.556223 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 10:05:00.592067 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 10:05:00.648412 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 10:05:00.649917 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:05:00.649917 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 10:05:00.974996 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 10:05:01.097591 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 10:05:01.099676 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:05:01.099676 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:05:01.099676 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 10:05:01.335580 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 10:05:01.533776 systemd-networkd[743]: eth0: Gained IPv6LL
Feb 9 10:05:01.576647 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 10:05:01.579010 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:05:01.579010 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:05:01.579010 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:05:01.579010 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 10:05:01.579010 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Feb 9 10:05:01.626755 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 10:05:02.037169 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Feb 9 10:05:02.039285 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 10:05:02.040496 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:05:02.040496 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 9 10:05:02.062921 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 10:05:02.648241 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 9 10:05:02.650507 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:05:02.650507 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:05:02.650507 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 9 10:05:02.670710 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 10:05:02.937660 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 9 10:05:02.939868 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:05:02.941017 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 10:05:02.942274 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 10:05:03.177372 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 10:05:03.219349 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 10:05:03.219349 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(10): [started] processing unit "prepare-critools.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(10): [finished] processing unit "prepare-critools.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Feb 9 10:05:03.222255 ignition[859]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:05:03.244679 ignition[859]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:05:03.260611 kernel: kauditd_printk_skb: 22 callbacks suppressed
Feb 9 10:05:03.260633 kernel: audit: type=1130 audit(1707473103.254:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.253030 systemd[1]: Finished ignition-files.service.
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:05:03.262015 ignition[859]: INFO : files: files passed
Feb 9 10:05:03.262015 ignition[859]: INFO : Ignition finished successfully
Feb 9 10:05:03.285231 kernel: audit: type=1130 audit(1707473103.262:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.285255 kernel: audit: type=1130 audit(1707473103.265:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.285265 kernel: audit: type=1131 audit(1707473103.265:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.255804 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 10:05:03.290191 kernel: audit: type=1130 audit(1707473103.285:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:03.290210 kernel: audit: type=1131 audit(1707473103.285:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 10:05:03.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.259140 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 10:05:03.291996 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 10:05:03.259806 systemd[1]: Starting ignition-quench.service... Feb 9 10:05:03.294658 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 10:05:03.261680 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 10:05:03.263918 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 10:05:03.263997 systemd[1]: Finished ignition-quench.service. Feb 9 10:05:03.266755 systemd[1]: Reached target ignition-complete.target. Feb 9 10:05:03.272603 systemd[1]: Starting initrd-parse-etc.service... Feb 9 10:05:03.284679 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 10:05:03.284791 systemd[1]: Finished initrd-parse-etc.service. Feb 9 10:05:03.286055 systemd[1]: Reached target initrd-fs.target. Feb 9 10:05:03.290865 systemd[1]: Reached target initrd.target. Feb 9 10:05:03.292674 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 10:05:03.293451 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 10:05:03.303302 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 10:05:03.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.304836 systemd[1]: Starting initrd-cleanup.service... Feb 9 10:05:03.307322 kernel: audit: type=1130 audit(1707473103.303:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.312665 systemd[1]: Stopped target network.target. Feb 9 10:05:03.313525 systemd[1]: Stopped target nss-lookup.target. Feb 9 10:05:03.314660 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 10:05:03.315883 systemd[1]: Stopped target timers.target. Feb 9 10:05:03.316999 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 10:05:03.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.317106 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 10:05:03.321338 kernel: audit: type=1131 audit(1707473103.317:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.318209 systemd[1]: Stopped target initrd.target. Feb 9 10:05:03.320995 systemd[1]: Stopped target basic.target. Feb 9 10:05:03.322131 systemd[1]: Stopped target ignition-complete.target. 
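The op(18) through op(1c) preset entries earlier are, on disk, nothing more than symlink management under /etc/systemd/system: "setting preset to enabled" links a unit into a .wants directory, and "removing enablement symlink(s)" deletes that link. A sketch of that mechanic follows; multi-user.target.wants is an assumption here, since the real WantedBy= target comes from each unit's [Install] section, which this log does not show.

package main

import (
	"os"
	"path/filepath"
)

const wantsDir = "/etc/systemd/system/multi-user.target.wants"

// enable mimics "setting preset to enabled": link the unit into the wants dir.
func enable(unit string) error {
	if err := os.MkdirAll(wantsDir, 0o755); err != nil {
		return err
	}
	return os.Symlink(
		filepath.Join("/etc/systemd/system", unit),
		filepath.Join(wantsDir, unit),
	)
}

// disable mimics "removing enablement symlink(s)" for a disabled preset.
func disable(unit string) error {
	err := os.Remove(filepath.Join(wantsDir, unit))
	if os.IsNotExist(err) {
		return nil // already disabled, nothing to remove
	}
	return err
}

func main() {
	// Errors ignored for brevity; these mirror the op(18)-op(1c) entries.
	_ = disable("coreos-metadata.service")
	for _, u := range []string{
		"prepare-cni-plugins.service",
		"prepare-critools.service",
		"prepare-helm.service",
	} {
		_ = enable(u)
	}
}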
Feb 9 10:05:03.323284 systemd[1]: Stopped target ignition-diskful.target. Feb 9 10:05:03.324426 systemd[1]: Stopped target initrd-root-device.target. Feb 9 10:05:03.325667 systemd[1]: Stopped target remote-fs.target. Feb 9 10:05:03.326838 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 10:05:03.328061 systemd[1]: Stopped target sysinit.target. Feb 9 10:05:03.329253 systemd[1]: Stopped target local-fs.target. Feb 9 10:05:03.330416 systemd[1]: Stopped target local-fs-pre.target. Feb 9 10:05:03.331521 systemd[1]: Stopped target swap.target. Feb 9 10:05:03.335731 kernel: audit: type=1131 audit(1707473103.332:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.332527 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 10:05:03.332640 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 10:05:03.339717 kernel: audit: type=1131 audit(1707473103.337:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.333781 systemd[1]: Stopped target cryptsetup.target. Feb 9 10:05:03.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.336460 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 10:05:03.336561 systemd[1]: Stopped dracut-initqueue.service. Feb 9 10:05:03.337807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 10:05:03.337917 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 10:05:03.340607 systemd[1]: Stopped target paths.target. Feb 9 10:05:03.341564 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 10:05:03.345714 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 10:05:03.346619 systemd[1]: Stopped target slices.target. Feb 9 10:05:03.347823 systemd[1]: Stopped target sockets.target. Feb 9 10:05:03.348873 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 10:05:03.348948 systemd[1]: Closed iscsid.socket. Feb 9 10:05:03.349857 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 10:05:03.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.349928 systemd[1]: Closed iscsiuio.socket. Feb 9 10:05:03.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.350954 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 10:05:03.351052 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
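The audit records interleaved through this stretch (type=1130 SERVICE_START, type=1131 SERVICE_STOP) are flat key=value strings, so the unit name and outcome can be pulled out with a single expression. A small illustrative parser, not part of any tooling shown in this log; the sample line is copied from the records above.

package main

import (
	"fmt"
	"regexp"
)

// unit= runs to the next space; res= is the final success/failure flag.
var re = regexp.MustCompile(`unit=(\S+).*res=(\w+)`)

func main() {
	line := `type=1131 audit(1707473103.317:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("unit=%s res=%s\n", m[1], m[2]) // unit=dracut-pre-pivot res=success
	}
}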
Feb 9 10:05:03.352165 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 10:05:03.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.352258 systemd[1]: Stopped ignition-files.service. Feb 9 10:05:03.354136 systemd[1]: Stopping ignition-mount.service... Feb 9 10:05:03.355197 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 10:05:03.355324 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 10:05:03.357483 systemd[1]: Stopping sysroot-boot.service... Feb 9 10:05:03.358361 systemd[1]: Stopping systemd-networkd.service... Feb 9 10:05:03.361994 systemd[1]: Stopping systemd-resolved.service... Feb 9 10:05:03.363494 ignition[899]: INFO : Ignition 2.14.0 Feb 9 10:05:03.363494 ignition[899]: INFO : Stage: umount Feb 9 10:05:03.363494 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 10:05:03.363494 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 10:05:03.363494 ignition[899]: INFO : umount: umount passed Feb 9 10:05:03.363494 ignition[899]: INFO : Ignition finished successfully Feb 9 10:05:03.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.362981 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 10:05:03.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.363111 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 10:05:03.364498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 10:05:03.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.364598 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 10:05:03.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.367869 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 10:05:03.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.368243 systemd-networkd[743]: eth0: DHCPv6 lease lost Feb 9 10:05:03.376000 audit: BPF prog-id=9 op=UNLOAD Feb 9 10:05:03.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:05:03.368584 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 10:05:03.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.368662 systemd[1]: Stopped ignition-mount.service. Feb 9 10:05:03.370206 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 10:05:03.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.370286 systemd[1]: Stopped systemd-networkd.service. Feb 9 10:05:03.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.371774 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 10:05:03.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.371857 systemd[1]: Stopped sysroot-boot.service. Feb 9 10:05:03.372964 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 10:05:03.373041 systemd[1]: Closed systemd-networkd.socket. Feb 9 10:05:03.373919 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 10:05:03.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.373957 systemd[1]: Stopped ignition-disks.service. Feb 9 10:05:03.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.375182 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 10:05:03.375218 systemd[1]: Stopped ignition-kargs.service. Feb 9 10:05:03.376369 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 10:05:03.392000 audit: BPF prog-id=6 op=UNLOAD Feb 9 10:05:03.376405 systemd[1]: Stopped ignition-setup.service. Feb 9 10:05:03.377543 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 10:05:03.377580 systemd[1]: Stopped initrd-setup-root.service. Feb 9 10:05:03.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.379304 systemd[1]: Stopping network-cleanup.service... Feb 9 10:05:03.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.380030 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 10:05:03.380081 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 10:05:03.381356 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 9 10:05:03.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.381396 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:05:03.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.383176 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 10:05:03.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.383215 systemd[1]: Stopped systemd-modules-load.service. Feb 9 10:05:03.384224 systemd[1]: Stopping systemd-udevd.service... Feb 9 10:05:03.387680 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 10:05:03.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.388191 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 10:05:03.388273 systemd[1]: Stopped systemd-resolved.service. Feb 9 10:05:03.389607 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 10:05:03.389680 systemd[1]: Finished initrd-cleanup.service. Feb 9 10:05:03.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.393803 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 10:05:03.393917 systemd[1]: Stopped systemd-udevd.service. Feb 9 10:05:03.395209 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 10:05:03.395284 systemd[1]: Stopped network-cleanup.service. Feb 9 10:05:03.396198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 10:05:03.396228 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 10:05:03.397461 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 10:05:03.397491 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 10:05:03.398742 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 10:05:03.398784 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 10:05:03.399945 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 10:05:03.399983 systemd[1]: Stopped dracut-cmdline.service. Feb 9 10:05:03.401152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 10:05:03.401189 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 10:05:03.403029 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 10:05:03.404257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 10:05:03.404312 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 10:05:03.407724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 10:05:03.407803 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
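With the initrd torn down, PID 1 is about to pivot into /sysroot ("Switching root." in the next entries). What that step amounts to is the classic move-mount dance: move the prepared root over /, chroot into it, and exec the real init. A rough sketch follows, assuming it runs as PID 1 inside a Linux initramfs; this illustrates the mechanism, not systemd's exact implementation.

package main

import (
	"os"
	"syscall"
)

func switchRoot(newRoot string) error {
	if err := os.Chdir(newRoot); err != nil {
		return err
	}
	// Move the prepared root mount on top of / (the initramfs).
	if err := syscall.Mount(newRoot, "/", "", syscall.MS_MOVE, ""); err != nil {
		return err
	}
	if err := syscall.Chroot("."); err != nil {
		return err
	}
	if err := os.Chdir("/"); err != nil {
		return err
	}
	// PID 1 now execs the real init; on success this never returns.
	return syscall.Exec("/usr/lib/systemd/systemd", []string{"systemd"}, os.Environ())
}

func main() {
	if err := switchRoot("/sysroot"); err != nil {
		panic(err)
	}
}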
Feb 9 10:05:03.409251 systemd[1]: Reached target initrd-switch-root.target. Feb 9 10:05:03.411022 systemd[1]: Starting initrd-switch-root.service... Feb 9 10:05:03.416832 systemd[1]: Switching root. Feb 9 10:05:03.436151 iscsid[748]: iscsid shutting down. Feb 9 10:05:03.436631 systemd-journald[289]: Journal stopped Feb 9 10:05:05.492721 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 10:05:05.492781 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 10:05:05.492800 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 10:05:05.492811 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 10:05:05.492821 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 10:05:05.492831 kernel: SELinux: policy capability open_perms=1 Feb 9 10:05:05.492841 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 10:05:05.492860 kernel: SELinux: policy capability always_check_network=0 Feb 9 10:05:05.492870 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 10:05:05.492880 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 10:05:05.492890 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 10:05:05.492901 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 10:05:05.492912 systemd[1]: Successfully loaded SELinux policy in 30.356ms. Feb 9 10:05:05.492933 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.878ms. Feb 9 10:05:05.492946 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 10:05:05.492957 systemd[1]: Detected virtualization kvm. Feb 9 10:05:05.492968 systemd[1]: Detected architecture arm64. Feb 9 10:05:05.492978 systemd[1]: Detected first boot. Feb 9 10:05:05.492990 systemd[1]: Initializing machine ID from VM UUID. Feb 9 10:05:05.493000 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 10:05:05.493010 systemd[1]: Populated /etc with preset unit settings. Feb 9 10:05:05.493021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:05:05.493033 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:05:05.493044 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:05:05.493056 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 10:05:05.493068 systemd[1]: Stopped iscsiuio.service. Feb 9 10:05:05.493079 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 10:05:05.493089 systemd[1]: Stopped iscsid.service. Feb 9 10:05:05.493100 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 10:05:05.493111 systemd[1]: Stopped initrd-switch-root.service. Feb 9 10:05:05.493121 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 10:05:05.493132 systemd[1]: Created slice system-addon\x2dconfig.slice. 
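The slice names that start appearing here, such as system-addon\x2dconfig.slice, look garbled but are correct: "-" is the slice hierarchy separator, so a literal dash inside a component ("addon-config") is escaped as \x2d, the way systemd-escape does it. A minimal sketch of that escaping rule (leading-dot and other corner cases omitted):

package main

import "fmt"

func sdEscape(s string) string {
	out := ""
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':' || c == '_' || c == '.':
			out += string(c) // allowed characters pass through
		case c == '/':
			out += "-" // path separator maps to the unit-name separator
		default:
			out += fmt.Sprintf(`\x%02x`, c) // everything else, including '-'
		}
	}
	return out
}

func main() {
	// "system" slice + "addon-config" component -> system-addon\x2dconfig.slice
	fmt.Printf("system-%s.slice\n", sdEscape("addon-config"))
}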
Feb 9 10:05:05.493143 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 10:05:05.493153 systemd[1]: Created slice system-getty.slice. Feb 9 10:05:05.493165 systemd[1]: Created slice system-modprobe.slice. Feb 9 10:05:05.493176 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 10:05:05.493186 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 10:05:05.493197 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 10:05:05.493207 systemd[1]: Created slice user.slice. Feb 9 10:05:05.493219 systemd[1]: Started systemd-ask-password-console.path. Feb 9 10:05:05.493230 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 10:05:05.493243 systemd[1]: Set up automount boot.automount. Feb 9 10:05:05.493253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 10:05:05.493264 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 10:05:05.493274 systemd[1]: Stopped target initrd-fs.target. Feb 9 10:05:05.493285 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 10:05:05.493296 systemd[1]: Reached target integritysetup.target. Feb 9 10:05:05.493306 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 10:05:05.493317 systemd[1]: Reached target remote-fs.target. Feb 9 10:05:05.493327 systemd[1]: Reached target slices.target. Feb 9 10:05:05.493337 systemd[1]: Reached target swap.target. Feb 9 10:05:05.493349 systemd[1]: Reached target torcx.target. Feb 9 10:05:05.493360 systemd[1]: Reached target veritysetup.target. Feb 9 10:05:05.493370 systemd[1]: Listening on systemd-coredump.socket. Feb 9 10:05:05.493381 systemd[1]: Listening on systemd-initctl.socket. Feb 9 10:05:05.493391 systemd[1]: Listening on systemd-networkd.socket. Feb 9 10:05:05.493402 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 10:05:05.493413 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 10:05:05.493423 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 10:05:05.493433 systemd[1]: Mounting dev-hugepages.mount... Feb 9 10:05:05.493445 systemd[1]: Mounting dev-mqueue.mount... Feb 9 10:05:05.493456 systemd[1]: Mounting media.mount... Feb 9 10:05:05.493466 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 10:05:05.493477 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 10:05:05.493490 systemd[1]: Mounting tmp.mount... Feb 9 10:05:05.493501 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 10:05:05.493512 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 10:05:05.493523 systemd[1]: Starting kmod-static-nodes.service... Feb 9 10:05:05.493533 systemd[1]: Starting modprobe@configfs.service... Feb 9 10:05:05.493545 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 10:05:05.493555 systemd[1]: Starting modprobe@drm.service... Feb 9 10:05:05.493566 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 10:05:05.493576 systemd[1]: Starting modprobe@fuse.service... Feb 9 10:05:05.493587 systemd[1]: Starting modprobe@loop.service... Feb 9 10:05:05.493598 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 10:05:05.493609 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 10:05:05.493619 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 10:05:05.493630 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 10:05:05.493642 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 10:05:05.493652 systemd[1]: Stopped systemd-journald.service. 
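The "Listening on ..." entries above are socket units: systemd binds the socket itself and starts the owning service only on demand, handing over the already-bound descriptor as fd 3. Below is a sketch of the receiving side of that protocol (LISTEN_PID/LISTEN_FDS, with descriptors starting at SD_LISTEN_FDS_START == 3); the service logic itself is illustrative.

package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
)

func listenerFromSystemd() (net.Listener, error) {
	// systemd sets LISTEN_PID to the intended recipient's PID.
	if pid, _ := strconv.Atoi(os.Getenv("LISTEN_PID")); pid != os.Getpid() {
		return nil, fmt.Errorf("socket not intended for this process")
	}
	n, _ := strconv.Atoi(os.Getenv("LISTEN_FDS"))
	if n < 1 {
		return nil, fmt.Errorf("no sockets passed")
	}
	f := os.NewFile(3, "from-systemd") // SD_LISTEN_FDS_START
	return net.FileListener(f)
}

func main() {
	l, err := listenerFromSystemd()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer l.Close()
	for {
		c, err := l.Accept()
		if err != nil {
			break
		}
		c.Close() // a real service would handle the connection here
	}
}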
Feb 9 10:05:05.493663 systemd[1]: Starting systemd-journald.service... Feb 9 10:05:05.493673 kernel: loop: module loaded Feb 9 10:05:05.493682 kernel: fuse: init (API version 7.34) Feb 9 10:05:05.493703 systemd[1]: Starting systemd-modules-load.service... Feb 9 10:05:05.493715 systemd[1]: Starting systemd-network-generator.service... Feb 9 10:05:05.493726 systemd[1]: Starting systemd-remount-fs.service... Feb 9 10:05:05.493736 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 10:05:05.493747 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 10:05:05.493757 systemd[1]: Stopped verity-setup.service. Feb 9 10:05:05.493767 systemd[1]: Mounted dev-hugepages.mount. Feb 9 10:05:05.493779 systemd[1]: Mounted dev-mqueue.mount. Feb 9 10:05:05.493789 systemd[1]: Mounted media.mount. Feb 9 10:05:05.493801 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 10:05:05.493811 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 10:05:05.493822 systemd[1]: Mounted tmp.mount. Feb 9 10:05:05.493833 systemd[1]: Finished kmod-static-nodes.service. Feb 9 10:05:05.493850 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 10:05:05.493861 systemd[1]: Finished modprobe@configfs.service. Feb 9 10:05:05.493873 systemd-journald[991]: Journal started Feb 9 10:05:05.493917 systemd-journald[991]: Runtime Journal (/run/log/journal/b71e2b0aae804549a6e4b4c31d51f601) is 6.0M, max 48.7M, 42.6M free. Feb 9 10:05:03.487000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 10:05:03.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 10:05:03.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 10:05:03.661000 audit: BPF prog-id=10 op=LOAD Feb 9 10:05:03.661000 audit: BPF prog-id=10 op=UNLOAD Feb 9 10:05:03.661000 audit: BPF prog-id=11 op=LOAD Feb 9 10:05:03.661000 audit: BPF prog-id=11 op=UNLOAD Feb 9 10:05:03.701000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 10:05:03.701000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c5882 a1=40000c8d98 a2=40000cf000 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:05:03.701000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:05:03.702000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 10:05:03.702000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5959 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:05:03.702000 audit: CWD cwd="/" Feb 9 10:05:03.702000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:05:03.702000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:05:03.702000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:05:05.360000 audit: BPF prog-id=12 op=LOAD Feb 9 10:05:05.360000 audit: BPF prog-id=3 op=UNLOAD Feb 9 10:05:05.360000 audit: BPF prog-id=13 op=LOAD Feb 9 10:05:05.360000 audit: BPF prog-id=14 op=LOAD Feb 9 10:05:05.360000 audit: BPF prog-id=4 op=UNLOAD Feb 9 10:05:05.360000 audit: BPF prog-id=5 op=UNLOAD Feb 9 10:05:05.361000 audit: BPF prog-id=15 op=LOAD Feb 9 10:05:05.361000 audit: BPF prog-id=12 op=UNLOAD Feb 9 10:05:05.361000 audit: BPF prog-id=16 op=LOAD Feb 9 10:05:05.361000 audit: BPF prog-id=17 op=LOAD Feb 9 10:05:05.361000 audit: BPF prog-id=13 op=UNLOAD Feb 9 10:05:05.361000 audit: BPF prog-id=14 op=UNLOAD Feb 9 10:05:05.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.372000 audit: BPF prog-id=15 op=UNLOAD Feb 9 10:05:05.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:05:05.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.463000 audit: BPF prog-id=18 op=LOAD Feb 9 10:05:05.463000 audit: BPF prog-id=19 op=LOAD Feb 9 10:05:05.463000 audit: BPF prog-id=20 op=LOAD Feb 9 10:05:05.463000 audit: BPF prog-id=16 op=UNLOAD Feb 9 10:05:05.463000 audit: BPF prog-id=17 op=UNLOAD Feb 9 10:05:05.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.491000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 10:05:05.491000 audit[991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcb3f2cc0 a2=4000 a3=1 items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:05:05.491000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 10:05:05.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.700189 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:05:05.359197 systemd[1]: Queued start job for default target multi-user.target. Feb 9 10:05:03.700794 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 10:05:05.359209 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 10:05:03.700815 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 10:05:05.363174 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 10:05:03.700844 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 10:05:03.700862 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 10:05:05.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:03.700891 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 10:05:03.700902 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 10:05:05.495713 systemd[1]: Started systemd-journald.service. Feb 9 10:05:03.701082 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 10:05:03.701116 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 10:05:03.701127 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 10:05:05.495900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 10:05:03.701553 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 10:05:03.701587 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 10:05:03.701604 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 10:05:03.701618 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 10:05:03.701634 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 10:05:03.701662 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 10:05:05.496381 systemd[1]: Finished modprobe@dm_mod.service. 
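The torcx-generator entries in this stretch walk a fixed, ordered list of store directories, skip the ones that do not exist ("store skipped ... no such file or directory"), and register every *.torcx.tgz archive they find. A sketch of that lookup order; the store paths are the ones the generator printed above, and the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func collectArchives(stores []string) []string {
	var found []string
	for _, dir := range stores {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Printf("store skipped: %v\n", err) // missing stores are not fatal
			continue
		}
		for _, e := range entries {
			if strings.HasSuffix(e.Name(), ".torcx.tgz") {
				found = append(found, filepath.Join(dir, e.Name()))
			}
		}
	}
	return found
}

func main() {
	stores := []string{
		"/usr/share/torcx/store",
		"/usr/share/oem/torcx/store/3510.3.2",
		"/usr/share/oem/torcx/store",
		"/var/lib/torcx/store/3510.3.2",
		"/var/lib/torcx/store",
	}
	for _, a := range collectArchives(stores) {
		fmt.Println(a)
	}
}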
Feb 9 10:05:05.111272 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 10:05:05.111522 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 10:05:05.111617 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 10:05:05.111799 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 10:05:05.111856 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 10:05:05.111912 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2024-02-09T10:05:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 10:05:05.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.497640 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 10:05:05.497828 systemd[1]: Finished modprobe@drm.service. Feb 9 10:05:05.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.498907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 10:05:05.499188 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 10:05:05.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:05:05.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.500266 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 10:05:05.500409 systemd[1]: Finished modprobe@fuse.service. Feb 9 10:05:05.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.501491 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 10:05:05.501625 systemd[1]: Finished modprobe@loop.service. Feb 9 10:05:05.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.502822 systemd[1]: Finished systemd-modules-load.service. Feb 9 10:05:05.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.503986 systemd[1]: Finished systemd-network-generator.service. Feb 9 10:05:05.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.505300 systemd[1]: Finished systemd-remount-fs.service. Feb 9 10:05:05.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.506579 systemd[1]: Reached target network-pre.target. Feb 9 10:05:05.508581 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 10:05:05.510532 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 10:05:05.511394 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 10:05:05.512862 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 10:05:05.514787 systemd[1]: Starting systemd-journal-flush.service... Feb 9 10:05:05.515600 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 10:05:05.516663 systemd[1]: Starting systemd-random-seed.service... Feb 9 10:05:05.517619 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 10:05:05.518749 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:05:05.521367 systemd-journald[991]: Time spent on flushing to /var/log/journal/b71e2b0aae804549a6e4b4c31d51f601 is 21.833ms for 1022 entries. 
Feb 9 10:05:05.521367 systemd-journald[991]: System Journal (/var/log/journal/b71e2b0aae804549a6e4b4c31d51f601) is 8.0M, max 195.6M, 187.6M free. Feb 9 10:05:05.555626 systemd-journald[991]: Received client request to flush runtime journal. Feb 9 10:05:05.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.522218 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 10:05:05.523452 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 10:05:05.556279 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 10:05:05.524733 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 10:05:05.527366 systemd[1]: Starting systemd-sysusers.service... Feb 9 10:05:05.530935 systemd[1]: Finished systemd-random-seed.service. Feb 9 10:05:05.532141 systemd[1]: Reached target first-boot-complete.target. Feb 9 10:05:05.537079 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 10:05:05.538219 systemd[1]: Finished systemd-sysctl.service. Feb 9 10:05:05.540187 systemd[1]: Starting systemd-udev-settle.service... Feb 9 10:05:05.547040 systemd[1]: Finished systemd-sysusers.service. Feb 9 10:05:05.556542 systemd[1]: Finished systemd-journal-flush.service. Feb 9 10:05:05.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.876778 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 10:05:05.877000 audit: BPF prog-id=21 op=LOAD Feb 9 10:05:05.877000 audit: BPF prog-id=22 op=LOAD Feb 9 10:05:05.877000 audit: BPF prog-id=7 op=UNLOAD Feb 9 10:05:05.877000 audit: BPF prog-id=8 op=UNLOAD Feb 9 10:05:05.878786 systemd[1]: Starting systemd-udevd.service... Feb 9 10:05:05.897001 systemd-udevd[1036]: Using default interface naming scheme 'v252'. Feb 9 10:05:05.908081 systemd[1]: Started systemd-udevd.service. 
Feb 9 10:05:05.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.909000 audit: BPF prog-id=23 op=LOAD Feb 9 10:05:05.910330 systemd[1]: Starting systemd-networkd.service... Feb 9 10:05:05.914000 audit: BPF prog-id=24 op=LOAD Feb 9 10:05:05.914000 audit: BPF prog-id=25 op=LOAD Feb 9 10:05:05.914000 audit: BPF prog-id=26 op=LOAD Feb 9 10:05:05.915588 systemd[1]: Starting systemd-userdbd.service... Feb 9 10:05:05.925103 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 10:05:05.952254 systemd[1]: Started systemd-userdbd.service. Feb 9 10:05:05.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:05.976150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 10:05:06.002564 systemd-networkd[1044]: lo: Link UP Feb 9 10:05:06.002576 systemd-networkd[1044]: lo: Gained carrier Feb 9 10:05:06.002920 systemd-networkd[1044]: Enumeration completed Feb 9 10:05:06.002999 systemd[1]: Started systemd-networkd.service. Feb 9 10:05:06.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:06.003757 systemd-networkd[1044]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 10:05:06.004835 systemd-networkd[1044]: eth0: Link UP Feb 9 10:05:06.004853 systemd-networkd[1044]: eth0: Gained carrier Feb 9 10:05:06.017032 systemd[1]: Finished systemd-udev-settle.service. Feb 9 10:05:06.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:06.018881 systemd[1]: Starting lvm2-activation-early.service... Feb 9 10:05:06.027849 systemd-networkd[1044]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 10:05:06.032423 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 10:05:06.055424 systemd[1]: Finished lvm2-activation-early.service. Feb 9 10:05:06.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:06.056243 systemd[1]: Reached target cryptsetup.target. Feb 9 10:05:06.057871 systemd[1]: Starting lvm2-activation.service... Feb 9 10:05:06.061129 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 10:05:06.081527 systemd[1]: Finished lvm2-activation.service. Feb 9 10:05:06.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:06.082265 systemd[1]: Reached target local-fs-pre.target. Feb 9 10:05:06.082886 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
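A quick check on the DHCPv4 lease networkd just logged: 10.0.0.120/16 places the host in 10.0.0.0/16, which contains the advertised gateway 10.0.0.1, so the gateway is reachable on-link. The same arithmetic in Go:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("10.0.0.120/16")
	gw := netip.MustParseAddr("10.0.0.1")
	fmt.Println("network:", p.Masked())            // 10.0.0.0/16
	fmt.Println("gateway on-link:", p.Contains(gw)) // true
}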
Feb 9 10:05:06.082911 systemd[1]: Reached target local-fs.target.
Feb 9 10:05:06.083445 systemd[1]: Reached target machines.target.
Feb 9 10:05:06.085049 systemd[1]: Starting ldconfig.service...
Feb 9 10:05:06.085871 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 10:05:06.085925 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:06.087113 systemd[1]: Starting systemd-boot-update.service...
Feb 9 10:05:06.089265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 10:05:06.092089 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 10:05:06.093090 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:05:06.093124 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:05:06.094036 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 10:05:06.096343 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl)
Feb 9 10:05:06.097395 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 10:05:06.101166 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 10:05:06.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.112899 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 10:05:06.114717 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 10:05:06.122491 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 10:05:06.194143 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31)
Feb 9 10:05:06.194143 systemd-fsck[1082]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 9 10:05:06.196110 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 10:05:06.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.198602 systemd[1]: Mounting boot.mount...
Feb 9 10:05:06.208623 systemd[1]: Mounted boot.mount.
Feb 9 10:05:06.217623 systemd[1]: Finished systemd-boot-update.service.
Feb 9 10:05:06.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.234910 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 10:05:06.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.275163 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 10:05:06.279271 systemd[1]: Finished ldconfig.service.
Feb 9 10:05:06.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.282146 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 10:05:06.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.284035 systemd[1]: Starting audit-rules.service...
Feb 9 10:05:06.285565 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 10:05:06.287294 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 10:05:06.288000 audit: BPF prog-id=27 op=LOAD
Feb 9 10:05:06.290087 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:05:06.290000 audit: BPF prog-id=28 op=LOAD
Feb 9 10:05:06.292285 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 10:05:06.294110 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 10:05:06.296263 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 10:05:06.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.297491 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 10:05:06.299000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.302681 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 10:05:06.309783 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 10:05:06.311776 systemd[1]: Starting systemd-update-done.service...
Feb 9 10:05:06.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.317462 systemd[1]: Finished systemd-update-done.service.
Feb 9 10:05:06.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:06.331000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 10:05:06.331000 audit[1107]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcbd34d50 a2=420 a3=0 items=0 ppid=1086 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:05:06.331000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 10:05:06.332205 augenrules[1107]: No rules
Feb 9 10:05:06.332743 systemd[1]: Finished audit-rules.service.
Feb 9 10:05:06.339863 systemd-resolved[1090]: Positive Trust Anchors:
Feb 9 10:05:06.340178 systemd-resolved[1090]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:05:06.340264 systemd-resolved[1090]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:05:06.349667 systemd[1]: Started systemd-timesyncd.service.
Feb 9 10:05:06.350448 systemd-timesyncd[1091]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 10:05:06.350501 systemd-timesyncd[1091]: Initial clock synchronization to Fri 2024-02-09 10:05:06.181770 UTC.
Feb 9 10:05:06.350800 systemd[1]: Reached target time-set.target.
Feb 9 10:05:06.355042 systemd-resolved[1090]: Defaulting to hostname 'linux'.
Feb 9 10:05:06.356423 systemd[1]: Started systemd-resolved.service.
Feb 9 10:05:06.357068 systemd[1]: Reached target network.target.
Feb 9 10:05:06.357596 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:05:06.358188 systemd[1]: Reached target sysinit.target.
Feb 9 10:05:06.358772 systemd[1]: Started motdgen.path.
Feb 9 10:05:06.359279 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 10:05:06.360179 systemd[1]: Started logrotate.timer.
Feb 9 10:05:06.360948 systemd[1]: Started mdadm.timer.
Feb 9 10:05:06.361634 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 10:05:06.362462 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 10:05:06.362495 systemd[1]: Reached target paths.target.
Feb 9 10:05:06.363219 systemd[1]: Reached target timers.target.
Feb 9 10:05:06.364216 systemd[1]: Listening on dbus.socket.
Feb 9 10:05:06.365817 systemd[1]: Starting docker.socket...
Feb 9 10:05:06.368640 systemd[1]: Listening on sshd.socket.
Feb 9 10:05:06.369465 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:06.369865 systemd[1]: Listening on docker.socket.
Feb 9 10:05:06.370647 systemd[1]: Reached target sockets.target.
Feb 9 10:05:06.371408 systemd[1]: Reached target basic.target.
Feb 9 10:05:06.372186 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:05:06.372216 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:05:06.373173 systemd[1]: Starting containerd.service...
Feb 9 10:05:06.374755 systemd[1]: Starting dbus.service...
Feb 9 10:05:06.376315 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 10:05:06.378122 systemd[1]: Starting extend-filesystems.service...
Feb 9 10:05:06.378954 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 10:05:06.380179 systemd[1]: Starting motdgen.service...
Feb 9 10:05:06.384918 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 10:05:06.386477 systemd[1]: Starting prepare-critools.service...
Feb 9 10:05:06.387326 jq[1117]: false
Feb 9 10:05:06.388261 systemd[1]: Starting prepare-helm.service...
Feb 9 10:05:06.389919 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 10:05:06.391602 systemd[1]: Starting sshd-keygen.service...
Feb 9 10:05:06.394573 systemd[1]: Starting systemd-logind.service...
Feb 9 10:05:06.395210 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:06.395273 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 10:05:06.395642 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 10:05:06.396509 systemd[1]: Starting update-engine.service...
Feb 9 10:05:06.399425 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 10:05:06.401615 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 10:05:06.401784 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 10:05:06.402240 jq[1136]: true
Feb 9 10:05:06.407070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 10:05:06.407221 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 10:05:06.418935 tar[1142]: linux-arm64/helm
Feb 9 10:05:06.419175 jq[1143]: true
Feb 9 10:05:06.419958 tar[1139]: ./
Feb 9 10:05:06.419958 tar[1139]: ./loopback
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda1
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda2
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda3
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found usr
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda4
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda6
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda7
Feb 9 10:05:06.425346 extend-filesystems[1118]: Found vda9
Feb 9 10:05:06.425346 extend-filesystems[1118]: Checking size of /dev/vda9
Feb 9 10:05:06.441010 tar[1140]: crictl
Feb 9 10:05:06.431272 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 10:05:06.431418 systemd[1]: Finished motdgen.service.
Feb 9 10:05:06.453714 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 10:05:06.452798 dbus-daemon[1116]: [system] SELinux support is enabled
Feb 9 10:05:06.452948 systemd[1]: Started dbus.service.
Feb 9 10:05:06.454171 extend-filesystems[1118]: Resized partition /dev/vda9
Feb 9 10:05:06.470161 extend-filesystems[1157]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 10:05:06.455401 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 10:05:06.455422 systemd[1]: Reached target system-config.target.
Feb 9 10:05:06.456357 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 10:05:06.456373 systemd[1]: Reached target user-config.target.
Feb 9 10:05:06.474327 systemd-logind[1133]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 9 10:05:06.478710 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 10:05:06.479760 systemd-logind[1133]: New seat seat0.
Feb 9 10:05:06.481119 systemd[1]: Started systemd-logind.service.
Feb 9 10:05:06.483112 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 10:05:06.500001 extend-filesystems[1157]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 10:05:06.500001 extend-filesystems[1157]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 10:05:06.500001 extend-filesystems[1157]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 10:05:06.504188 extend-filesystems[1118]: Resized filesystem in /dev/vda9
Feb 9 10:05:06.500487 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 10:05:06.500639 systemd[1]: Finished extend-filesystems.service.
Feb 9 10:05:06.508554 update_engine[1135]: I0209 10:05:06.508224 1135 main.cc:92] Flatcar Update Engine starting
Feb 9 10:05:06.511335 systemd[1]: Started update-engine.service.
Feb 9 10:05:06.511444 update_engine[1135]: I0209 10:05:06.511418 1135 update_check_scheduler.cc:74] Next update check in 9m0s
Feb 9 10:05:06.514166 systemd[1]: Started locksmithd.service.
Feb 9 10:05:06.519732 bash[1173]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 10:05:06.519926 tar[1139]: ./bandwidth
Feb 9 10:05:06.520524 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 10:05:06.544246 env[1144]: time="2024-02-09T10:05:06.544185920Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 10:05:06.561232 tar[1139]: ./ptp
Feb 9 10:05:06.564223 env[1144]: time="2024-02-09T10:05:06.564188760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 10:05:06.564509 env[1144]: time="2024-02-09T10:05:06.564488120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.566047 env[1144]: time="2024-02-09T10:05:06.566013640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:06.566138 env[1144]: time="2024-02-09T10:05:06.566122800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.566582 env[1144]: time="2024-02-09T10:05:06.566555800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:06.566847 env[1144]: time="2024-02-09T10:05:06.566749920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.566943 env[1144]: time="2024-02-09T10:05:06.566925360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 10:05:06.567000 env[1144]: time="2024-02-09T10:05:06.566985640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.567180 env[1144]: time="2024-02-09T10:05:06.567158680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.567536 env[1144]: time="2024-02-09T10:05:06.567514000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:06.567755 env[1144]: time="2024-02-09T10:05:06.567732920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:06.567854 env[1144]: time="2024-02-09T10:05:06.567826640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 10:05:06.567969 env[1144]: time="2024-02-09T10:05:06.567950840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 10:05:06.568038 env[1144]: time="2024-02-09T10:05:06.568023920Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579267640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579300600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579313560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579342680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579356160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579370360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579382480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579754200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579777640Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579791440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579804960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579818680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.579984960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 10:05:06.581710 env[1144]: time="2024-02-09T10:05:06.580068280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580293600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580321200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580336040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580486000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580498240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580510560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580521840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580533480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580545200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580556640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580568280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580580400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580717760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580739480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582027 env[1144]: time="2024-02-09T10:05:06.580752440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582352 env[1144]: time="2024-02-09T10:05:06.580763680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 10:05:06.582352 env[1144]: time="2024-02-09T10:05:06.580778840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 10:05:06.582352 env[1144]: time="2024-02-09T10:05:06.580789560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 10:05:06.582352 env[1144]: time="2024-02-09T10:05:06.580807440Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 10:05:06.582352 env[1144]: time="2024-02-09T10:05:06.580847400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 10:05:06.582483 env[1144]: time="2024-02-09T10:05:06.581042280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 10:05:06.582483 env[1144]: time="2024-02-09T10:05:06.581096920Z" level=info msg="Connect containerd service"
Feb 9 10:05:06.582483 env[1144]: time="2024-02-09T10:05:06.581127400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 10:05:06.583266 env[1144]: time="2024-02-09T10:05:06.583243120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 10:05:06.583622 env[1144]: time="2024-02-09T10:05:06.583599000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 10:05:06.583743 env[1144]: time="2024-02-09T10:05:06.583728240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 10:05:06.583861 env[1144]: time="2024-02-09T10:05:06.583836400Z" level=info msg="containerd successfully booted in 0.040417s"
Feb 9 10:05:06.583921 systemd[1]: Started containerd.service.
Feb 9 10:05:06.584937 env[1144]: time="2024-02-09T10:05:06.584892680Z" level=info msg="Start subscribing containerd event"
Feb 9 10:05:06.584994 env[1144]: time="2024-02-09T10:05:06.584950440Z" level=info msg="Start recovering state"
Feb 9 10:05:06.585018 env[1144]: time="2024-02-09T10:05:06.585009960Z" level=info msg="Start event monitor"
Feb 9 10:05:06.585040 env[1144]: time="2024-02-09T10:05:06.585027360Z" level=info msg="Start snapshots syncer"
Feb 9 10:05:06.585040 env[1144]: time="2024-02-09T10:05:06.585037280Z" level=info msg="Start cni network conf syncer for default"
Feb 9 10:05:06.585078 env[1144]: time="2024-02-09T10:05:06.585044840Z" level=info msg="Start streaming server"
Feb 9 10:05:06.610263 tar[1139]: ./vlan
Feb 9 10:05:06.658281 tar[1139]: ./host-device
Feb 9 10:05:06.703914 tar[1139]: ./tuning
Feb 9 10:05:06.746194 tar[1139]: ./vrf
Feb 9 10:05:06.789063 tar[1139]: ./sbr
Feb 9 10:05:06.829733 tar[1139]: ./tap
Feb 9 10:05:06.839773 tar[1142]: linux-arm64/LICENSE
Feb 9 10:05:06.839896 tar[1142]: linux-arm64/README.md
Feb 9 10:05:06.844154 systemd[1]: Finished prepare-helm.service.
Feb 9 10:05:06.848082 locksmithd[1175]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 10:05:06.869047 tar[1139]: ./dhcp
Feb 9 10:05:06.949605 tar[1139]: ./static
Feb 9 10:05:06.956665 systemd[1]: Finished prepare-critools.service.
Feb 9 10:05:06.973448 tar[1139]: ./firewall
Feb 9 10:05:07.005206 tar[1139]: ./macvlan
Feb 9 10:05:07.033556 tar[1139]: ./dummy
Feb 9 10:05:07.061441 tar[1139]: ./bridge
Feb 9 10:05:07.091816 tar[1139]: ./ipvlan
Feb 9 10:05:07.119673 tar[1139]: ./portmap
Feb 9 10:05:07.146205 tar[1139]: ./host-local
Feb 9 10:05:07.180616 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 10:05:07.292887 systemd-networkd[1044]: eth0: Gained IPv6LL
Feb 9 10:05:08.273810 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 10:05:08.290244 systemd[1]: Finished sshd-keygen.service.
Feb 9 10:05:08.292383 systemd[1]: Starting issuegen.service...
Feb 9 10:05:08.296480 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 10:05:08.296611 systemd[1]: Finished issuegen.service.
Feb 9 10:05:08.298596 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 10:05:08.304015 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 10:05:08.305995 systemd[1]: Started getty@tty1.service.
Feb 9 10:05:08.307835 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 9 10:05:08.308830 systemd[1]: Reached target getty.target.
Feb 9 10:05:08.309606 systemd[1]: Reached target multi-user.target.
Feb 9 10:05:08.311400 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 10:05:08.317151 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 10:05:08.317287 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 10:05:08.318315 systemd[1]: Startup finished in 555ms (kernel) + 5.878s (initrd) + 4.862s (userspace) = 11.296s.
Feb 9 10:05:10.258365 systemd[1]: Created slice system-sshd.slice.
Feb 9 10:05:10.259442 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:36442.service.
Feb 9 10:05:10.307957 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 36442 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:05:10.311813 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.318867 systemd[1]: Created slice user-500.slice.
Feb 9 10:05:10.319848 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 10:05:10.321148 systemd-logind[1133]: New session 1 of user core.
Feb 9 10:05:10.326956 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 10:05:10.328160 systemd[1]: Starting user@500.service...
Feb 9 10:05:10.330577 (systemd)[1207]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.384739 systemd[1207]: Queued start job for default target default.target.
Feb 9 10:05:10.385317 systemd[1207]: Reached target paths.target.
Feb 9 10:05:10.385425 systemd[1207]: Reached target sockets.target.
Feb 9 10:05:10.385499 systemd[1207]: Reached target timers.target.
Feb 9 10:05:10.385567 systemd[1207]: Reached target basic.target.
Feb 9 10:05:10.385729 systemd[1]: Started user@500.service.
Feb 9 10:05:10.386276 systemd[1207]: Reached target default.target.
Feb 9 10:05:10.386316 systemd[1207]: Startup finished in 50ms.
Feb 9 10:05:10.386436 systemd[1]: Started session-1.scope.
Feb 9 10:05:10.436011 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:36448.service.
Feb 9 10:05:10.486731 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 36448 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:05:10.487775 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.494627 systemd-logind[1133]: New session 2 of user core.
Feb 9 10:05:10.495357 systemd[1]: Started session-2.scope.
Feb 9 10:05:10.548982 sshd[1216]: pam_unix(sshd:session): session closed for user core
Feb 9 10:05:10.552866 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:36464.service.
Feb 9 10:05:10.553652 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:36448.service: Deactivated successfully.
Feb 9 10:05:10.554418 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 10:05:10.554968 systemd-logind[1133]: Session 2 logged out. Waiting for processes to exit.
Feb 9 10:05:10.555616 systemd-logind[1133]: Removed session 2.
Feb 9 10:05:10.595843 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 36464 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:05:10.596832 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.599604 systemd-logind[1133]: New session 3 of user core.
Feb 9 10:05:10.600295 systemd[1]: Started session-3.scope.
Feb 9 10:05:10.647060 sshd[1221]: pam_unix(sshd:session): session closed for user core
Feb 9 10:05:10.649272 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:36464.service: Deactivated successfully.
Feb 9 10:05:10.649884 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 10:05:10.650323 systemd-logind[1133]: Session 3 logged out. Waiting for processes to exit.
Feb 9 10:05:10.651178 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:36474.service.
Feb 9 10:05:10.651729 systemd-logind[1133]: Removed session 3.
Feb 9 10:05:10.691813 sshd[1228]: Accepted publickey for core from 10.0.0.1 port 36474 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:05:10.692819 sshd[1228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.695624 systemd-logind[1133]: New session 4 of user core.
Feb 9 10:05:10.696319 systemd[1]: Started session-4.scope.
Feb 9 10:05:10.747197 sshd[1228]: pam_unix(sshd:session): session closed for user core
Feb 9 10:05:10.750728 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:36474.service: Deactivated successfully.
Feb 9 10:05:10.751310 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 10:05:10.751827 systemd-logind[1133]: Session 4 logged out. Waiting for processes to exit.
Feb 9 10:05:10.752798 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:36480.service.
Feb 9 10:05:10.753591 systemd-logind[1133]: Removed session 4.
Feb 9 10:05:10.793111 sshd[1234]: Accepted publickey for core from 10.0.0.1 port 36480 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 10:05:10.794096 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:05:10.796920 systemd-logind[1133]: New session 5 of user core.
Feb 9 10:05:10.797762 systemd[1]: Started session-5.scope.
Feb 9 10:05:10.854962 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 10:05:10.855154 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 10:05:11.576587 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 10:05:11.583709 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 10:05:11.584080 systemd[1]: Reached target network-online.target.
Feb 9 10:05:11.585389 systemd[1]: Starting docker.service...
Feb 9 10:05:11.665130 env[1255]: time="2024-02-09T10:05:11.665086839Z" level=info msg="Starting up"
Feb 9 10:05:11.666812 env[1255]: time="2024-02-09T10:05:11.666785986Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 10:05:11.666895 env[1255]: time="2024-02-09T10:05:11.666883170Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 10:05:11.666979 env[1255]: time="2024-02-09T10:05:11.666962695Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 10:05:11.667032 env[1255]: time="2024-02-09T10:05:11.667018873Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 10:05:11.669022 env[1255]: time="2024-02-09T10:05:11.668998827Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 10:05:11.669109 env[1255]: time="2024-02-09T10:05:11.669095972Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 10:05:11.669161 env[1255]: time="2024-02-09T10:05:11.669149028Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 10:05:11.669222 env[1255]: time="2024-02-09T10:05:11.669209472Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 10:05:11.890427 env[1255]: time="2024-02-09T10:05:11.890303152Z" level=info msg="Loading containers: start."
Feb 9 10:05:11.975819 kernel: Initializing XFRM netlink socket
Feb 9 10:05:11.997284 env[1255]: time="2024-02-09T10:05:11.997252313Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 10:05:12.053387 systemd-networkd[1044]: docker0: Link UP
Feb 9 10:05:12.061020 env[1255]: time="2024-02-09T10:05:12.060976684Z" level=info msg="Loading containers: done."
Feb 9 10:05:12.079470 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck854155847-merged.mount: Deactivated successfully.
Feb 9 10:05:12.084522 env[1255]: time="2024-02-09T10:05:12.084473794Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 10:05:12.084658 env[1255]: time="2024-02-09T10:05:12.084631985Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 10:05:12.084765 env[1255]: time="2024-02-09T10:05:12.084744199Z" level=info msg="Daemon has completed initialization"
Feb 9 10:05:12.098582 systemd[1]: Started docker.service.
Feb 9 10:05:12.105386 env[1255]: time="2024-02-09T10:05:12.105339816Z" level=info msg="API listen on /run/docker.sock"
Feb 9 10:05:12.120449 systemd[1]: Reloading.
Feb 9 10:05:12.157705 /usr/lib/systemd/system-generators/torcx-generator[1397]: time="2024-02-09T10:05:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 10:05:12.160721 /usr/lib/systemd/system-generators/torcx-generator[1397]: time="2024-02-09T10:05:12Z" level=info msg="torcx already run"
Feb 9 10:05:12.211741 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 10:05:12.211758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 10:05:12.226379 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 10:05:12.287431 systemd[1]: Started kubelet.service.
Feb 9 10:05:12.426394 kubelet[1434]: E0209 10:05:12.426260 1434 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 9 10:05:12.428654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 10:05:12.428785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 10:05:12.632817 env[1144]: time="2024-02-09T10:05:12.632768813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 9 10:05:13.269818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878012921.mount: Deactivated successfully.
Feb 9 10:05:15.236966 env[1144]: time="2024-02-09T10:05:15.236903312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:15.238285 env[1144]: time="2024-02-09T10:05:15.238259817Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:15.239949 env[1144]: time="2024-02-09T10:05:15.239918677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:15.241515 env[1144]: time="2024-02-09T10:05:15.241494224Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:15.242298 env[1144]: time="2024-02-09T10:05:15.242274215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\""
Feb 9 10:05:15.251912 env[1144]: time="2024-02-09T10:05:15.251875578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\""
Feb 9 10:05:17.164265 env[1144]: time="2024-02-09T10:05:17.164222080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:17.165969 env[1144]: time="2024-02-09T10:05:17.165937476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:17.167606 env[1144]: time="2024-02-09T10:05:17.167578289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:17.170027 env[1144]: time="2024-02-09T10:05:17.169995094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:17.170775 env[1144]: time="2024-02-09T10:05:17.170744516Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\""
Feb 9 10:05:17.179841 env[1144]: time="2024-02-09T10:05:17.179813991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\""
Feb 9 10:05:18.467069 env[1144]: time="2024-02-09T10:05:18.467015650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:18.468227 env[1144]: time="2024-02-09T10:05:18.468197249Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:18.470179 env[1144]: time="2024-02-09T10:05:18.470154906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:18.471572 env[1144]: time="2024-02-09T10:05:18.471544372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:18.473046 env[1144]: time="2024-02-09T10:05:18.472994264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\""
Feb 9 10:05:18.481802 env[1144]: time="2024-02-09T10:05:18.481764151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 9 10:05:19.566768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003767462.mount: Deactivated successfully.
Feb 9 10:05:20.053698 env[1144]: time="2024-02-09T10:05:20.053581112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.054998 env[1144]: time="2024-02-09T10:05:20.054972644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.056394 env[1144]: time="2024-02-09T10:05:20.056364934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.057897 env[1144]: time="2024-02-09T10:05:20.057868050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.058392 env[1144]: time="2024-02-09T10:05:20.058347662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\""
Feb 9 10:05:20.066870 env[1144]: time="2024-02-09T10:05:20.066836694Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 10:05:20.509246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588672403.mount: Deactivated successfully.
Feb 9 10:05:20.512741 env[1144]: time="2024-02-09T10:05:20.512700936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.514556 env[1144]: time="2024-02-09T10:05:20.514526451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.515839 env[1144]: time="2024-02-09T10:05:20.515815684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.517273 env[1144]: time="2024-02-09T10:05:20.517249220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:20.517808 env[1144]: time="2024-02-09T10:05:20.517783309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 9 10:05:20.526469 env[1144]: time="2024-02-09T10:05:20.526438880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Feb 9 10:05:21.346002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108858069.mount: Deactivated successfully.
Feb 9 10:05:22.679483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 10:05:22.679660 systemd[1]: Stopped kubelet.service.
Feb 9 10:05:22.681105 systemd[1]: Started kubelet.service.
Feb 9 10:05:22.720284 kubelet[1488]: E0209 10:05:22.720236 1488 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 9 10:05:22.723092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 10:05:22.723215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 10:05:23.237084 env[1144]: time="2024-02-09T10:05:23.237034845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:23.238422 env[1144]: time="2024-02-09T10:05:23.238371671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:23.239908 env[1144]: time="2024-02-09T10:05:23.239868337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:23.241454 env[1144]: time="2024-02-09T10:05:23.241413363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:23.242207 env[1144]: time="2024-02-09T10:05:23.242178814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\""
Feb 9 10:05:23.251020 env[1144]: time="2024-02-09T10:05:23.250986882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 9 10:05:23.815619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784326732.mount: Deactivated successfully.
Feb 9 10:05:24.859334 env[1144]: time="2024-02-09T10:05:24.859276263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:24.860698 env[1144]: time="2024-02-09T10:05:24.860651502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:24.862338 env[1144]: time="2024-02-09T10:05:24.862314433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:24.863665 env[1144]: time="2024-02-09T10:05:24.863633276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:05:24.864217 env[1144]: time="2024-02-09T10:05:24.864189582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 9 10:05:28.712649 systemd[1]: Stopped kubelet.service.
Feb 9 10:05:28.726058 systemd[1]: Reloading.
Feb 9 10:05:28.777791 /usr/lib/systemd/system-generators/torcx-generator[1599]: time="2024-02-09T10:05:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 10:05:28.777820 /usr/lib/systemd/system-generators/torcx-generator[1599]: time="2024-02-09T10:05:28Z" level=info msg="torcx already run"
Feb 9 10:05:28.828748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 10:05:28.828768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 10:05:28.843521 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 10:05:28.910132 systemd[1]: Started kubelet.service.
Feb 9 10:05:28.958718 kubelet[1637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 10:05:28.959018 kubelet[1637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 10:05:28.959064 kubelet[1637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 10:05:28.959217 kubelet[1637]: I0209 10:05:28.959169 1637 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 10:05:30.245491 kubelet[1637]: I0209 10:05:30.245451 1637 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 9 10:05:30.245491 kubelet[1637]: I0209 10:05:30.245483 1637 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 10:05:30.245816 kubelet[1637]: I0209 10:05:30.245675 1637 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 9 10:05:30.250570 kubelet[1637]: I0209 10:05:30.250549 1637 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 10:05:30.250848 kubelet[1637]: E0209 10:05:30.250833 1637 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.120:6443: connect: connection refused
Feb 9 10:05:30.252037 kubelet[1637]: W0209 10:05:30.252017 1637 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 10:05:30.252643 kubelet[1637]: I0209 10:05:30.252627 1637 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 10:05:30.252864 kubelet[1637]: I0209 10:05:30.252841 1637 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 10:05:30.252941 kubelet[1637]: I0209 10:05:30.252906 1637 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 10:05:30.252941 kubelet[1637]: I0209 10:05:30.252926 1637 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 10:05:30.252941 kubelet[1637]: I0209 10:05:30.252936 1637 container_manager_linux.go:302] "Creating device plugin manager"
Feb 9 10:05:30.253064 kubelet[1637]: I0209 10:05:30.253016 1637 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 10:05:30.255779 kubelet[1637]: I0209 10:05:30.255754 1637 kubelet.go:405] "Attempting to sync node with API server"
Feb 9 10:05:30.255886 kubelet[1637]: I0209 10:05:30.255873 1637 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 10:05:30.255968 kubelet[1637]: I0209 10:05:30.255958 1637 kubelet.go:309] "Adding apiserver pod source"
Feb 9 10:05:30.256029 kubelet[1637]: I0209 10:05:30.256020 1637 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 10:05:30.256818 kubelet[1637]: W0209 10:05:30.256780 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused
Feb 9 10:05:30.256910 kubelet[1637]: E0209 10:05:30.256899 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused
Feb 9 10:05:30.257041 kubelet[1637]: I0209 10:05:30.257028 1637 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 10:05:30.257439 kubelet[1637]: W0209 10:05:30.257421 1637 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 10:05:30.258145 kubelet[1637]: I0209 10:05:30.258126 1637 server.go:1168] "Started kubelet" Feb 9 10:05:30.259883 kubelet[1637]: E0209 10:05:30.259800 1637 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b229c3f08f10fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 5, 30, 258108666, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 5, 30, 258108666, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.120:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.120:6443: connect: connection refused'(may retry after sleeping) Feb 9 10:05:30.259978 kubelet[1637]: I0209 10:05:30.259966 1637 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:05:30.260019 kubelet[1637]: I0209 10:05:30.260008 1637 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:05:30.261273 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 10:05:30.261346 kubelet[1637]: E0209 10:05:30.260856 1637 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:05:30.261346 kubelet[1637]: E0209 10:05:30.260880 1637 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:05:30.261513 kubelet[1637]: W0209 10:05:30.261449 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.261558 kubelet[1637]: E0209 10:05:30.261535 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.261776 kubelet[1637]: I0209 10:05:30.261745 1637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:05:30.261833 kubelet[1637]: I0209 10:05:30.261762 1637 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:05:30.263777 kubelet[1637]: I0209 10:05:30.263747 1637 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:05:30.264771 kubelet[1637]: I0209 10:05:30.264749 1637 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:05:30.265042 kubelet[1637]: E0209 10:05:30.265016 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" Feb 9 10:05:30.265042 kubelet[1637]: W0209 10:05:30.265006 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.265309 kubelet[1637]: E0209 10:05:30.265287 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.275975 kubelet[1637]: I0209 10:05:30.275952 1637 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 10:05:30.276864 kubelet[1637]: I0209 10:05:30.276847 1637 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 10:05:30.276923 kubelet[1637]: I0209 10:05:30.276874 1637 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:05:30.276923 kubelet[1637]: I0209 10:05:30.276892 1637 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:05:30.276975 kubelet[1637]: E0209 10:05:30.276936 1637 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:05:30.277441 kubelet[1637]: W0209 10:05:30.277416 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.277545 kubelet[1637]: E0209 10:05:30.277533 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:30.281180 kubelet[1637]: I0209 10:05:30.281160 1637 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:05:30.281180 kubelet[1637]: I0209 10:05:30.281178 1637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:05:30.281271 kubelet[1637]: I0209 10:05:30.281193 1637 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:05:30.283598 kubelet[1637]: I0209 10:05:30.283562 1637 policy_none.go:49] "None policy: Start" Feb 9 10:05:30.284103 kubelet[1637]: I0209 10:05:30.284074 1637 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:05:30.284103 kubelet[1637]: I0209 10:05:30.284096 1637 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:05:30.288988 systemd[1]: Created slice kubepods.slice. Feb 9 10:05:30.292388 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 10:05:30.294744 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 10:05:30.305371 kubelet[1637]: I0209 10:05:30.305344 1637 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:05:30.305557 kubelet[1637]: I0209 10:05:30.305534 1637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:05:30.306552 kubelet[1637]: E0209 10:05:30.306537 1637 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 10:05:30.365764 kubelet[1637]: I0209 10:05:30.365743 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:05:30.366149 kubelet[1637]: E0209 10:05:30.366136 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Feb 9 10:05:30.377262 kubelet[1637]: I0209 10:05:30.377237 1637 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:30.378090 kubelet[1637]: I0209 10:05:30.378065 1637 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:30.378880 kubelet[1637]: I0209 10:05:30.378857 1637 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:30.383353 systemd[1]: Created slice kubepods-burstable-pod554f6a6401872c20854c5a48c1c73e16.slice. Feb 9 10:05:30.394624 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice. 
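
[Editor's note] The three consecutive "Topology Admit Handler" entries and the kubepods-burstable slices correspond to the control-plane static pods read from /etc/kubernetes/manifests. As a hedged illustration of what sits in that directory, a minimal static pod manifest: the image tag is inferred from the logged kubelet version, the registry matches the pause image pulled later in the log, and the scheduler.conf host path is an assumption (only the volume name "kubeconfig" appears in this log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-scheduler
        # tag assumed to track the logged kubelet version v1.27.2
        image: registry.k8s.io/kube-scheduler:v1.27.2
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig               # volume name as reconciled in the log
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf  # assumed; the log shows only the volume name
          type: FileOrCreate
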
Feb 9 10:05:30.408513 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice. Feb 9 10:05:30.465956 kubelet[1637]: E0209 10:05:30.465909 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Feb 9 10:05:30.566312 kubelet[1637]: I0209 10:05:30.566243 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:30.566312 kubelet[1637]: I0209 10:05:30.566277 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:30.566312 kubelet[1637]: I0209 10:05:30.566303 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:30.566463 kubelet[1637]: I0209 10:05:30.566322 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:30.566463 kubelet[1637]: I0209 10:05:30.566424 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:30.566530 kubelet[1637]: I0209 10:05:30.566479 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:30.566556 kubelet[1637]: I0209 10:05:30.566530 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:30.566583 kubelet[1637]: I0209 10:05:30.566570 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:30.566609 kubelet[1637]: I0209 10:05:30.566596 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 9 10:05:30.568090 kubelet[1637]: I0209 10:05:30.568071 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:05:30.568332 kubelet[1637]: E0209 10:05:30.568317 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Feb 9 10:05:30.693997 kubelet[1637]: E0209 10:05:30.693967 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:30.694523 env[1144]: time="2024-02-09T10:05:30.694465029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:554f6a6401872c20854c5a48c1c73e16,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:30.707060 kubelet[1637]: E0209 10:05:30.707029 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:30.707404 env[1144]: time="2024-02-09T10:05:30.707348908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:30.710724 kubelet[1637]: E0209 10:05:30.710540 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:30.711556 env[1144]: time="2024-02-09T10:05:30.711496811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:30.866837 kubelet[1637]: E0209 10:05:30.866767 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Feb 9 10:05:30.969993 kubelet[1637]: I0209 10:05:30.969971 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:05:30.970260 kubelet[1637]: E0209 10:05:30.970244 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Feb 9 10:05:31.079094 kubelet[1637]: W0209 10:05:31.079043 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:31.079094 kubelet[1637]: E0209 10:05:31.079095 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.120:6443: connect: connection refused Feb 9 10:05:31.161182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824994678.mount: Deactivated successfully. Feb 9 10:05:31.167091 env[1144]: time="2024-02-09T10:05:31.167059827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.168939 env[1144]: time="2024-02-09T10:05:31.168903530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.170018 env[1144]: time="2024-02-09T10:05:31.169993917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.170724 env[1144]: time="2024-02-09T10:05:31.170697315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.172381 env[1144]: time="2024-02-09T10:05:31.172351779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.174721 env[1144]: time="2024-02-09T10:05:31.174696972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.177782 env[1144]: time="2024-02-09T10:05:31.177752957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.182669 env[1144]: time="2024-02-09T10:05:31.182642653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.183783 env[1144]: time="2024-02-09T10:05:31.183755181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.184561 env[1144]: time="2024-02-09T10:05:31.184538031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.185260 env[1144]: time="2024-02-09T10:05:31.185234635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.185906 env[1144]: time="2024-02-09T10:05:31.185885278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:31.204478 env[1144]: time="2024-02-09T10:05:31.204360508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:31.204478 env[1144]: time="2024-02-09T10:05:31.204444516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:31.204478 env[1144]: time="2024-02-09T10:05:31.204455147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:31.204737 env[1144]: time="2024-02-09T10:05:31.204675718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81857267725723eca3735d3304446e128a8b00f5ca181b92ff9b41dccccdca4e pid=1680 runtime=io.containerd.runc.v2 Feb 9 10:05:31.207584 env[1144]: time="2024-02-09T10:05:31.206447282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:31.207584 env[1144]: time="2024-02-09T10:05:31.206515704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:31.207584 env[1144]: time="2024-02-09T10:05:31.206551553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:31.207584 env[1144]: time="2024-02-09T10:05:31.206705941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a19cc5154ee4d8faca618412980115cdfec86cdaacb3e611e70a6d0153a81be pid=1693 runtime=io.containerd.runc.v2 Feb 9 10:05:31.213494 env[1144]: time="2024-02-09T10:05:31.213417318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:31.213494 env[1144]: time="2024-02-09T10:05:31.213453047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:31.213494 env[1144]: time="2024-02-09T10:05:31.213462719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:31.213779 env[1144]: time="2024-02-09T10:05:31.213731329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5de4fcc9e19fb7164e957d837740e05a81d7feecd5317a749b1202352714029a pid=1717 runtime=io.containerd.runc.v2 Feb 9 10:05:31.221638 systemd[1]: Started cri-containerd-4a19cc5154ee4d8faca618412980115cdfec86cdaacb3e611e70a6d0153a81be.scope. Feb 9 10:05:31.222557 systemd[1]: Started cri-containerd-81857267725723eca3735d3304446e128a8b00f5ca181b92ff9b41dccccdca4e.scope. Feb 9 10:05:31.233279 systemd[1]: Started cri-containerd-5de4fcc9e19fb7164e957d837740e05a81d7feecd5317a749b1202352714029a.scope. 
Feb 9 10:05:31.294741 env[1144]: time="2024-02-09T10:05:31.294664113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"81857267725723eca3735d3304446e128a8b00f5ca181b92ff9b41dccccdca4e\"" Feb 9 10:05:31.301383 kubelet[1637]: E0209 10:05:31.301243 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:31.303568 env[1144]: time="2024-02-09T10:05:31.303517697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a19cc5154ee4d8faca618412980115cdfec86cdaacb3e611e70a6d0153a81be\"" Feb 9 10:05:31.303879 env[1144]: time="2024-02-09T10:05:31.303843938Z" level=info msg="CreateContainer within sandbox \"81857267725723eca3735d3304446e128a8b00f5ca181b92ff9b41dccccdca4e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 10:05:31.304181 kubelet[1637]: E0209 10:05:31.304064 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:31.305952 env[1144]: time="2024-02-09T10:05:31.305916204Z" level=info msg="CreateContainer within sandbox \"4a19cc5154ee4d8faca618412980115cdfec86cdaacb3e611e70a6d0153a81be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 10:05:31.306449 env[1144]: time="2024-02-09T10:05:31.306417695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:554f6a6401872c20854c5a48c1c73e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"5de4fcc9e19fb7164e957d837740e05a81d7feecd5317a749b1202352714029a\"" Feb 9 10:05:31.307001 kubelet[1637]: E0209 10:05:31.306881 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:31.311147 env[1144]: time="2024-02-09T10:05:31.309568479Z" level=info msg="CreateContainer within sandbox \"5de4fcc9e19fb7164e957d837740e05a81d7feecd5317a749b1202352714029a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 10:05:31.323467 env[1144]: time="2024-02-09T10:05:31.323420346Z" level=info msg="CreateContainer within sandbox \"4a19cc5154ee4d8faca618412980115cdfec86cdaacb3e611e70a6d0153a81be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ca3182a18c8b54feb2eaa965d2bf4267477c89cd5d5088e2eacc7657393f074\"" Feb 9 10:05:31.324158 env[1144]: time="2024-02-09T10:05:31.324133096Z" level=info msg="StartContainer for \"5ca3182a18c8b54feb2eaa965d2bf4267477c89cd5d5088e2eacc7657393f074\"" Feb 9 10:05:31.324258 env[1144]: time="2024-02-09T10:05:31.324230333Z" level=info msg="CreateContainer within sandbox \"81857267725723eca3735d3304446e128a8b00f5ca181b92ff9b41dccccdca4e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3a3c7b88091cbfef398431fc3573774a8ca767f1ef5f9dfddcd992fef20abe1\"" Feb 9 10:05:31.324598 env[1144]: time="2024-02-09T10:05:31.324576277Z" level=info msg="StartContainer for \"d3a3c7b88091cbfef398431fc3573774a8ca767f1ef5f9dfddcd992fef20abe1\"" Feb 9 10:05:31.325859 env[1144]: time="2024-02-09T10:05:31.325824568Z" level=info 
msg="CreateContainer within sandbox \"5de4fcc9e19fb7164e957d837740e05a81d7feecd5317a749b1202352714029a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c53be1247163d020d37438ed066cf34a6c505d840f48b90a222b7d93268a21ba\"" Feb 9 10:05:31.326252 env[1144]: time="2024-02-09T10:05:31.326230261Z" level=info msg="StartContainer for \"c53be1247163d020d37438ed066cf34a6c505d840f48b90a222b7d93268a21ba\"" Feb 9 10:05:31.340516 systemd[1]: Started cri-containerd-5ca3182a18c8b54feb2eaa965d2bf4267477c89cd5d5088e2eacc7657393f074.scope. Feb 9 10:05:31.342394 systemd[1]: Started cri-containerd-c53be1247163d020d37438ed066cf34a6c505d840f48b90a222b7d93268a21ba.scope. Feb 9 10:05:31.351273 systemd[1]: Started cri-containerd-d3a3c7b88091cbfef398431fc3573774a8ca767f1ef5f9dfddcd992fef20abe1.scope. Feb 9 10:05:31.398109 env[1144]: time="2024-02-09T10:05:31.397905887Z" level=info msg="StartContainer for \"c53be1247163d020d37438ed066cf34a6c505d840f48b90a222b7d93268a21ba\" returns successfully" Feb 9 10:05:31.400325 env[1144]: time="2024-02-09T10:05:31.400223024Z" level=info msg="StartContainer for \"d3a3c7b88091cbfef398431fc3573774a8ca767f1ef5f9dfddcd992fef20abe1\" returns successfully" Feb 9 10:05:31.423811 env[1144]: time="2024-02-09T10:05:31.423606774Z" level=info msg="StartContainer for \"5ca3182a18c8b54feb2eaa965d2bf4267477c89cd5d5088e2eacc7657393f074\" returns successfully" Feb 9 10:05:31.503805 kubelet[1637]: W0209 10:05:31.502626 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:31.503805 kubelet[1637]: E0209 10:05:31.502708 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Feb 9 10:05:31.772142 kubelet[1637]: I0209 10:05:31.772046 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:05:32.285317 kubelet[1637]: E0209 10:05:32.285293 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:32.287485 kubelet[1637]: E0209 10:05:32.287467 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:32.289179 kubelet[1637]: E0209 10:05:32.289161 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:33.291188 kubelet[1637]: E0209 10:05:33.291153 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:33.567570 kubelet[1637]: E0209 10:05:33.567468 1637 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 10:05:33.645268 kubelet[1637]: I0209 10:05:33.645225 1637 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 10:05:34.260501 kubelet[1637]: I0209 10:05:34.260451 1637 
apiserver.go:52] "Watching apiserver" Feb 9 10:05:34.265153 kubelet[1637]: I0209 10:05:34.265124 1637 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:05:34.285357 kubelet[1637]: I0209 10:05:34.285327 1637 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:05:34.295577 kubelet[1637]: E0209 10:05:34.295550 1637 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:34.296055 kubelet[1637]: E0209 10:05:34.296019 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:35.991130 systemd[1]: Reloading. Feb 9 10:05:36.034925 /usr/lib/systemd/system-generators/torcx-generator[1932]: time="2024-02-09T10:05:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:05:36.034956 /usr/lib/systemd/system-generators/torcx-generator[1932]: time="2024-02-09T10:05:36Z" level=info msg="torcx already run" Feb 9 10:05:36.091635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:05:36.091662 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:05:36.107042 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:05:36.185401 systemd[1]: Stopping kubelet.service... Feb 9 10:05:36.199096 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 10:05:36.199289 systemd[1]: Stopped kubelet.service. Feb 9 10:05:36.199331 systemd[1]: kubelet.service: Consumed 1.544s CPU time. Feb 9 10:05:36.200817 systemd[1]: Started kubelet.service. Feb 9 10:05:36.249759 kubelet[1970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:05:36.249759 kubelet[1970]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:05:36.249759 kubelet[1970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
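
[Editor's note] The 10:05:34 mirror-pod failure above ("no PriorityClass with name system-node-critical was found") is transient: the API server creates its built-in priority classes shortly after it comes up, and the mirror pods are accepted on retry. For reference, a sketch of that built-in class; the value is the standard upstream one, stated from general Kubernetes knowledge rather than from this log:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000        # built-in value; system-cluster-critical is 2000000000
    globalDefault: false
    description: Used for system critical pods that must not be moved from their current node.
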
Feb 9 10:05:36.250074 kubelet[1970]: I0209 10:05:36.249986 1970 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:05:36.255260 kubelet[1970]: I0209 10:05:36.255212 1970 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 10:05:36.255260 kubelet[1970]: I0209 10:05:36.255237 1970 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:05:36.255400 kubelet[1970]: I0209 10:05:36.255392 1970 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 10:05:36.256866 kubelet[1970]: I0209 10:05:36.256850 1970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 10:05:36.257726 kubelet[1970]: I0209 10:05:36.257697 1970 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:05:36.259091 kubelet[1970]: W0209 10:05:36.259072 1970 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:05:36.259799 kubelet[1970]: I0209 10:05:36.259778 1970 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 10:05:36.259978 kubelet[1970]: I0209 10:05:36.259962 1970 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:05:36.260038 kubelet[1970]: I0209 10:05:36.260024 1970 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 10:05:36.260112 kubelet[1970]: I0209 10:05:36.260043 1970 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 10:05:36.260112 kubelet[1970]: I0209 10:05:36.260053 1970 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 10:05:36.260112 kubelet[1970]: I0209 10:05:36.260085 1970 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:05:36.262950 kubelet[1970]: I0209 10:05:36.262929 1970 kubelet.go:405] "Attempting to sync node with API server" Feb 9 10:05:36.263038 kubelet[1970]: I0209 10:05:36.262956 1970 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:05:36.263038 kubelet[1970]: I0209 10:05:36.262996 1970 kubelet.go:309] "Adding apiserver pod 
source" Feb 9 10:05:36.263038 kubelet[1970]: I0209 10:05:36.263010 1970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:05:36.263919 kubelet[1970]: I0209 10:05:36.263901 1970 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:05:36.266939 kubelet[1970]: I0209 10:05:36.266918 1970 server.go:1168] "Started kubelet" Feb 9 10:05:36.267185 kubelet[1970]: I0209 10:05:36.267170 1970 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:05:36.267809 kubelet[1970]: I0209 10:05:36.267792 1970 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:05:36.268063 kubelet[1970]: I0209 10:05:36.268045 1970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:05:36.268729 kubelet[1970]: I0209 10:05:36.268702 1970 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:05:36.271559 kubelet[1970]: E0209 10:05:36.271523 1970 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:05:36.271647 kubelet[1970]: E0209 10:05:36.271576 1970 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:05:36.273958 kubelet[1970]: I0209 10:05:36.273939 1970 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:05:36.274069 kubelet[1970]: I0209 10:05:36.274054 1970 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:05:36.304307 kubelet[1970]: I0209 10:05:36.304281 1970 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 10:05:36.305297 kubelet[1970]: I0209 10:05:36.305280 1970 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 10:05:36.305407 kubelet[1970]: I0209 10:05:36.305395 1970 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:05:36.305484 kubelet[1970]: I0209 10:05:36.305474 1970 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:05:36.305594 kubelet[1970]: E0209 10:05:36.305582 1970 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:05:36.340277 kubelet[1970]: I0209 10:05:36.340254 1970 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:05:36.340428 kubelet[1970]: I0209 10:05:36.340416 1970 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:05:36.340489 kubelet[1970]: I0209 10:05:36.340480 1970 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:05:36.340736 kubelet[1970]: I0209 10:05:36.340720 1970 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 10:05:36.340820 kubelet[1970]: I0209 10:05:36.340810 1970 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 10:05:36.340872 kubelet[1970]: I0209 10:05:36.340864 1970 policy_none.go:49] "None policy: Start" Feb 9 10:05:36.341483 kubelet[1970]: I0209 10:05:36.341470 1970 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:05:36.341594 kubelet[1970]: I0209 10:05:36.341582 1970 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:05:36.341777 kubelet[1970]: I0209 10:05:36.341762 1970 state_mem.go:75] "Updated machine memory state" Feb 9 10:05:36.345155 kubelet[1970]: I0209 10:05:36.345137 1970 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:05:36.345648 kubelet[1970]: I0209 10:05:36.345630 1970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:05:36.377335 sudo[2004]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 10:05:36.377552 sudo[2004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 10:05:36.379728 kubelet[1970]: I0209 10:05:36.379696 1970 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 10:05:36.385888 kubelet[1970]: I0209 10:05:36.385868 1970 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 10:05:36.386062 kubelet[1970]: I0209 10:05:36.386051 1970 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 10:05:36.406114 kubelet[1970]: I0209 10:05:36.406093 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:36.406310 kubelet[1970]: I0209 10:05:36.406293 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:36.406421 kubelet[1970]: I0209 10:05:36.406408 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:36.474586 kubelet[1970]: I0209 10:05:36.474547 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:36.474586 kubelet[1970]: I0209 10:05:36.474590 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:36.474771 kubelet[1970]: I0209 10:05:36.474615 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:36.474771 kubelet[1970]: I0209 10:05:36.474635 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:36.474771 kubelet[1970]: I0209 10:05:36.474659 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:36.474771 kubelet[1970]: I0209 10:05:36.474712 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 9 10:05:36.474771 kubelet[1970]: I0209 10:05:36.474747 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/554f6a6401872c20854c5a48c1c73e16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"554f6a6401872c20854c5a48c1c73e16\") " pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:36.474890 kubelet[1970]: I0209 10:05:36.474773 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:36.474890 kubelet[1970]: I0209 10:05:36.474805 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:36.718009 kubelet[1970]: E0209 10:05:36.717980 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:36.718189 kubelet[1970]: E0209 10:05:36.718161 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:36.718260 kubelet[1970]: E0209 10:05:36.718107 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:36.891403 sudo[2004]: pam_unix(sudo:session): session closed for user root Feb 9 10:05:37.264183 kubelet[1970]: I0209 10:05:37.264140 1970 apiserver.go:52] "Watching apiserver" Feb 9 10:05:37.274620 kubelet[1970]: I0209 10:05:37.274581 1970 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:05:37.280756 kubelet[1970]: I0209 10:05:37.280727 1970 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:05:37.318863 kubelet[1970]: E0209 10:05:37.318214 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:37.323368 kubelet[1970]: E0209 10:05:37.323030 1970 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 10:05:37.323480 kubelet[1970]: E0209 10:05:37.323443 1970 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 10:05:37.323580 kubelet[1970]: E0209 10:05:37.323550 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:37.324105 kubelet[1970]: E0209 10:05:37.324088 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:37.341388 kubelet[1970]: I0209 10:05:37.341339 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.341302899 podCreationTimestamp="2024-02-09 10:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:05:37.340662625 +0000 UTC m=+1.134607991" watchObservedRunningTime="2024-02-09 10:05:37.341302899 +0000 UTC m=+1.135248265" Feb 9 10:05:37.341503 kubelet[1970]: I0209 10:05:37.341435 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.341418574 podCreationTimestamp="2024-02-09 10:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:05:37.334159281 +0000 UTC m=+1.128104647" watchObservedRunningTime="2024-02-09 10:05:37.341418574 +0000 UTC m=+1.135363940" Feb 9 10:05:37.347094 kubelet[1970]: I0209 10:05:37.347056 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.347029621 podCreationTimestamp="2024-02-09 10:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:05:37.346365436 +0000 UTC m=+1.140310802" watchObservedRunningTime="2024-02-09 10:05:37.347029621 +0000 UTC m=+1.140974947" Feb 9 10:05:38.319101 kubelet[1970]: E0209 10:05:38.319065 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:38.319101 kubelet[1970]: E0209 
10:05:38.319065 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:38.383425 sudo[1237]: pam_unix(sudo:session): session closed for user root Feb 9 10:05:38.385036 sshd[1234]: pam_unix(sshd:session): session closed for user core Feb 9 10:05:38.387400 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:36480.service: Deactivated successfully. Feb 9 10:05:38.388192 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 10:05:38.388363 systemd[1]: session-5.scope: Consumed 5.893s CPU time. Feb 9 10:05:38.388713 systemd-logind[1133]: Session 5 logged out. Waiting for processes to exit. Feb 9 10:05:38.389310 systemd-logind[1133]: Removed session 5. Feb 9 10:05:40.084097 kubelet[1970]: E0209 10:05:40.084072 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:40.339270 kubelet[1970]: E0209 10:05:40.339005 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:43.988590 kubelet[1970]: E0209 10:05:43.988257 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:44.326553 kubelet[1970]: E0209 10:05:44.326520 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:48.506411 kubelet[1970]: I0209 10:05:48.506384 1970 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 10:05:48.507265 env[1144]: time="2024-02-09T10:05:48.507220870Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
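
[Editor's note] The kuberuntime_manager entry above pushes the node's Pod CIDR (192.168.0.0/24) to the CRI, after which the kubelet waits for a CNI plugin (Cilium, unpacked earlier from cilium.tar.gz) to drop its config. If this cluster was brought up kubeadm-style, which is an assumption since the log only shows the node side, the cluster-level knob feeding this allocation is networking.podSubnet:

    apiVersion: kubeadm.k8s.io/v1beta3   # kubeadm itself is an assumption here
    kind: ClusterConfiguration
    networking:
      # the controller-manager hands each node a slice of this range; the value is
      # assumed so that the logged node CIDR 192.168.0.0/24 falls inside it
      podSubnet: 192.168.0.0/16
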
Feb 9 10:05:48.507493 kubelet[1970]: I0209 10:05:48.507418 1970 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 10:05:49.443304 kubelet[1970]: I0209 10:05:49.443263 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:49.446766 kubelet[1970]: W0209 10:05:49.446746 1970 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 10:05:49.446897 kubelet[1970]: E0209 10:05:49.446886 1970 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 10:05:49.447004 kubelet[1970]: W0209 10:05:49.446992 1970 reflector.go:533] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 10:05:49.447083 kubelet[1970]: E0209 10:05:49.447075 1970 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 10:05:49.448245 kubelet[1970]: I0209 10:05:49.448225 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:49.450675 systemd[1]: Created slice kubepods-besteffort-pod2b0aa5f4_fb7b_4e0c_9260_b4dc655b4a9b.slice. 
Feb 9 10:05:49.460220 kubelet[1970]: I0209 10:05:49.460192 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-proxy\") pod \"kube-proxy-882xx\" (UID: \"2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b\") " pod="kube-system/kube-proxy-882xx" Feb 9 10:05:49.460352 kubelet[1970]: I0209 10:05:49.460339 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-bpf-maps\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.460445 kubelet[1970]: I0209 10:05:49.460435 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-kernel\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.460549 kubelet[1970]: I0209 10:05:49.460539 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac3996f-db00-49cc-8664-f837c20fb825-cilium-config-path\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.460996 kubelet[1970]: I0209 10:05:49.460979 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-lib-modules\") pod \"kube-proxy-882xx\" (UID: \"2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b\") " pod="kube-system/kube-proxy-882xx" Feb 9 10:05:49.461116 kubelet[1970]: I0209 10:05:49.461105 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cni-path\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461219 kubelet[1970]: I0209 10:05:49.461199 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cac3996f-db00-49cc-8664-f837c20fb825-clustermesh-secrets\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461361 kubelet[1970]: I0209 10:05:49.461349 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-hostproc\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461459 kubelet[1970]: I0209 10:05:49.461448 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-etc-cni-netd\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461553 kubelet[1970]: I0209 10:05:49.461543 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-net\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461763 kubelet[1970]: I0209 10:05:49.461739 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-cgroup\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461813 kubelet[1970]: I0209 10:05:49.461779 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vh78\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461813 kubelet[1970]: I0209 10:05:49.461802 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-xtables-lock\") pod \"kube-proxy-882xx\" (UID: \"2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b\") " pod="kube-system/kube-proxy-882xx" Feb 9 10:05:49.461875 kubelet[1970]: I0209 10:05:49.461821 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-run\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461875 kubelet[1970]: I0209 10:05:49.461840 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-hubble-tls\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461875 kubelet[1970]: I0209 10:05:49.461857 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-lib-modules\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461875 kubelet[1970]: I0209 10:05:49.461876 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-xtables-lock\") pod \"cilium-l9jkd\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " pod="kube-system/cilium-l9jkd" Feb 9 10:05:49.461970 kubelet[1970]: I0209 10:05:49.461896 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8wg\" (UniqueName: \"kubernetes.io/projected/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-api-access-jp8wg\") pod \"kube-proxy-882xx\" (UID: \"2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b\") " pod="kube-system/kube-proxy-882xx" Feb 9 10:05:49.463651 systemd[1]: Created slice kubepods-burstable-podcac3996f_db00_49cc_8664_f837c20fb825.slice. Feb 9 10:05:49.540538 kubelet[1970]: I0209 10:05:49.540470 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:05:49.545460 systemd[1]: Created slice kubepods-besteffort-podd581129b_31be_40fb_afe2_cf7dd49c665d.slice. 
Feb 9 10:05:49.562901 kubelet[1970]: I0209 10:05:49.562866 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twkt\" (UniqueName: \"kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt\") pod \"cilium-operator-574c4bb98d-6fd9d\" (UID: \"d581129b-31be-40fb-afe2-cf7dd49c665d\") " pod="kube-system/cilium-operator-574c4bb98d-6fd9d" Feb 9 10:05:49.563044 kubelet[1970]: I0209 10:05:49.562961 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d581129b-31be-40fb-afe2-cf7dd49c665d-cilium-config-path\") pod \"cilium-operator-574c4bb98d-6fd9d\" (UID: \"d581129b-31be-40fb-afe2-cf7dd49c665d\") " pod="kube-system/cilium-operator-574c4bb98d-6fd9d" Feb 9 10:05:50.092620 kubelet[1970]: E0209 10:05:50.092582 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:50.347412 kubelet[1970]: E0209 10:05:50.347317 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:50.564449 kubelet[1970]: E0209 10:05:50.564412 1970 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.564781 kubelet[1970]: E0209 10:05:50.564516 1970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-proxy podName:2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b nodeName:}" failed. No retries permitted until 2024-02-09 10:05:51.06449581 +0000 UTC m=+14.858441176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-proxy") pod "kube-proxy-882xx" (UID: "2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b") : failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.577817 kubelet[1970]: E0209 10:05:50.577776 1970 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.577817 kubelet[1970]: E0209 10:05:50.577808 1970 projected.go:198] Error preparing data for projected volume kube-api-access-jp8wg for pod kube-system/kube-proxy-882xx: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.577993 kubelet[1970]: E0209 10:05:50.577867 1970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-api-access-jp8wg podName:2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b nodeName:}" failed. No retries permitted until 2024-02-09 10:05:51.077852717 +0000 UTC m=+14.871798043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jp8wg" (UniqueName: "kubernetes.io/projected/2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b-kube-api-access-jp8wg") pod "kube-proxy-882xx" (UID: "2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b") : failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.578866 kubelet[1970]: E0209 10:05:50.578841 1970 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.578866 kubelet[1970]: E0209 10:05:50.578866 1970 projected.go:198] Error preparing data for projected volume kube-api-access-6vh78 for pod kube-system/cilium-l9jkd: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.578943 kubelet[1970]: E0209 10:05:50.578901 1970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78 podName:cac3996f-db00-49cc-8664-f837c20fb825 nodeName:}" failed. No retries permitted until 2024-02-09 10:05:51.078889978 +0000 UTC m=+14.872835344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6vh78" (UniqueName: "kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78") pod "cilium-l9jkd" (UID: "cac3996f-db00-49cc-8664-f837c20fb825") : failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.671600 kubelet[1970]: E0209 10:05:50.671483 1970 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.671600 kubelet[1970]: E0209 10:05:50.671515 1970 projected.go:198] Error preparing data for projected volume kube-api-access-9twkt for pod kube-system/cilium-operator-574c4bb98d-6fd9d: failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:50.671600 kubelet[1970]: E0209 10:05:50.671566 1970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt podName:d581129b-31be-40fb-afe2-cf7dd49c665d nodeName:}" failed. No retries permitted until 2024-02-09 10:05:51.171550825 +0000 UTC m=+14.965496191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9twkt" (UniqueName: "kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt") pod "cilium-operator-574c4bb98d-6fd9d" (UID: "d581129b-31be-40fb-afe2-cf7dd49c665d") : failed to sync configmap cache: timed out waiting for the condition Feb 9 10:05:51.264486 kubelet[1970]: E0209 10:05:51.264445 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.264958 env[1144]: time="2024-02-09T10:05:51.264922533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-882xx,Uid:2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:51.266607 kubelet[1970]: E0209 10:05:51.266583 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.266976 env[1144]: time="2024-02-09T10:05:51.266942056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9jkd,Uid:cac3996f-db00-49cc-8664-f837c20fb825,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:51.300141 env[1144]: time="2024-02-09T10:05:51.299952863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:51.300141 env[1144]: time="2024-02-09T10:05:51.300014942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:51.300141 env[1144]: time="2024-02-09T10:05:51.300029822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:51.300323 env[1144]: time="2024-02-09T10:05:51.300186179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61 pid=2068 runtime=io.containerd.runc.v2 Feb 9 10:05:51.305367 env[1144]: time="2024-02-09T10:05:51.305301487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:51.305367 env[1144]: time="2024-02-09T10:05:51.305339726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:51.305367 env[1144]: time="2024-02-09T10:05:51.305351926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:51.305587 env[1144]: time="2024-02-09T10:05:51.305561522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7acdbeee77a23fdda2767d74d7ad1df14d3013cca15e54c9ac2ce77861579e06 pid=2085 runtime=io.containerd.runc.v2 Feb 9 10:05:51.311145 systemd[1]: Started cri-containerd-5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61.scope. Feb 9 10:05:51.321634 systemd[1]: Started cri-containerd-7acdbeee77a23fdda2767d74d7ad1df14d3013cca15e54c9ac2ce77861579e06.scope. 
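
The MountVolume.SetUp failures above are an expected startup race: the kubelet's configmap cache had not yet synced, so each operation is requeued with a delay (500ms here, per durationBeforeRetry). Kubelet backs failed volume operations off exponentially; a minimal sketch of that retry shape, with the doubling factor and cap as assumptions rather than kubelet's exact constants:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff retries op, doubling the delay after each failure.
    // The 500ms initial delay matches the log; the cap is an assumption.
    func retryWithBackoff(op func() error) error {
        delay := 500 * time.Millisecond
        const maxDelay = 2 * time.Minute
        for i := 0; i < 10; i++ {
            if err := op(); err == nil {
                return nil
            }
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        synced := false
        _ = retryWithBackoff(func() error {
            if !synced { // flips true once the informer cache syncs
                synced = true
                return errors.New("failed to sync configmap cache")
            }
            return nil
        })
        fmt.Println("mount succeeded after retry")
    }
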
Feb 9 10:05:51.343724 env[1144]: time="2024-02-09T10:05:51.343647278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9jkd,Uid:cac3996f-db00-49cc-8664-f837c20fb825,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\"" Feb 9 10:05:51.344388 kubelet[1970]: E0209 10:05:51.344359 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.345532 env[1144]: time="2024-02-09T10:05:51.345504005Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 10:05:51.347952 kubelet[1970]: E0209 10:05:51.347924 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.348891 env[1144]: time="2024-02-09T10:05:51.348736587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-6fd9d,Uid:d581129b-31be-40fb-afe2-cf7dd49c665d,Namespace:kube-system,Attempt:0,}" Feb 9 10:05:51.362338 env[1144]: time="2024-02-09T10:05:51.362286463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-882xx,Uid:2b0aa5f4-fb7b-4e0c-9260-b4dc655b4a9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7acdbeee77a23fdda2767d74d7ad1df14d3013cca15e54c9ac2ce77861579e06\"" Feb 9 10:05:51.363228 kubelet[1970]: E0209 10:05:51.363158 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.366213 env[1144]: time="2024-02-09T10:05:51.366121674Z" level=info msg="CreateContainer within sandbox \"7acdbeee77a23fdda2767d74d7ad1df14d3013cca15e54c9ac2ce77861579e06\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 10:05:51.374354 env[1144]: time="2024-02-09T10:05:51.374275728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:05:51.374354 env[1144]: time="2024-02-09T10:05:51.374318247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:05:51.374354 env[1144]: time="2024-02-09T10:05:51.374328687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:05:51.374596 env[1144]: time="2024-02-09T10:05:51.374531643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12 pid=2150 runtime=io.containerd.runc.v2 Feb 9 10:05:51.379717 env[1144]: time="2024-02-09T10:05:51.379664031Z" level=info msg="CreateContainer within sandbox \"7acdbeee77a23fdda2767d74d7ad1df14d3013cca15e54c9ac2ce77861579e06\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2f80fd3f24d84b03d091ba48b5f2406def975c7eb6942d118fa9e3c2c12dd09\"" Feb 9 10:05:51.381922 env[1144]: time="2024-02-09T10:05:51.380379898Z" level=info msg="StartContainer for \"b2f80fd3f24d84b03d091ba48b5f2406def975c7eb6942d118fa9e3c2c12dd09\"" Feb 9 10:05:51.388384 systemd[1]: Started cri-containerd-285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12.scope. 
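
The RunPodSandbox / CreateContainer / StartContainer lines above are the kubelet driving containerd over the CRI, and the call order is fixed: create the sandbox (the pause container and its namespaces), then create each container inside it, then start it. A minimal sketch of that sequence, using a hypothetical interface in place of the real gRPC-based CRI client from k8s.io/cri-api:

    package main

    import "fmt"

    // runtime is a hypothetical stand-in for a CRI runtime client.
    type runtime interface {
        RunPodSandbox(name string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    func startPod(r runtime, pod, container string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        c, err := r.CreateContainer(sb, container)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        return r.StartContainer(c) // "StartContainer ... returns successfully"
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(string) (string, error) {
        f.n++
        return fmt.Sprintf("sb-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, _ string) (string, error) { return sb + "/c0", nil }
    func (f *fakeRuntime) StartContainer(string) error                  { return nil }

    func main() {
        if err := startPod(&fakeRuntime{}, "kube-proxy-882xx", "kube-proxy"); err != nil {
            fmt.Println("pod start failed:", err)
        }
    }
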
Feb 9 10:05:51.402136 systemd[1]: Started cri-containerd-b2f80fd3f24d84b03d091ba48b5f2406def975c7eb6942d118fa9e3c2c12dd09.scope. Feb 9 10:05:51.440872 env[1144]: time="2024-02-09T10:05:51.440831012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-6fd9d,Uid:d581129b-31be-40fb-afe2-cf7dd49c665d,Namespace:kube-system,Attempt:0,} returns sandbox id \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\"" Feb 9 10:05:51.442227 kubelet[1970]: E0209 10:05:51.442204 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:51.449383 env[1144]: time="2024-02-09T10:05:51.449343259Z" level=info msg="StartContainer for \"b2f80fd3f24d84b03d091ba48b5f2406def975c7eb6942d118fa9e3c2c12dd09\" returns successfully" Feb 9 10:05:52.023133 update_engine[1135]: I0209 10:05:52.023092 1135 update_attempter.cc:509] Updating boot flags... Feb 9 10:05:52.338793 kubelet[1970]: E0209 10:05:52.338764 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:54.644257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686014118.mount: Deactivated successfully. Feb 9 10:05:56.865446 env[1144]: time="2024-02-09T10:05:56.865399766Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:56.867149 env[1144]: time="2024-02-09T10:05:56.867112822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:56.868965 env[1144]: time="2024-02-09T10:05:56.868939397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:56.869467 env[1144]: time="2024-02-09T10:05:56.869442990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 10:05:56.870929 env[1144]: time="2024-02-09T10:05:56.870520574Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:05:56.873208 env[1144]: time="2024-02-09T10:05:56.873155137Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:05:56.881583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136808436.mount: Deactivated successfully. 
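
The PullImage above requests quay.io/cilium/cilium:v1.12.5@sha256:... and containerd returns a reference resolved to the digest; when a reference carries both a tag and a digest, the digest is what gets pulled, so the tag is effectively documentation. A small sketch of splitting such a reference, assuming only the simple repo:tag@digest shape seen here rather than every valid OCI reference:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks "repo:tag@sha256:..." into its parts.
    func splitRef(ref string) (repo, tag, digest string) {
        if at := strings.Index(ref, "@"); at >= 0 {
            ref, digest = ref[:at], ref[at+1:]
        }
        // Guard against registry ports: only treat the last colon as a tag
        // separator when no path component follows it.
        if c := strings.LastIndex(ref, ":"); c >= 0 && !strings.Contains(ref[c:], "/") {
            ref, tag = ref[:c], ref[c+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a47")
        fmt.Println(repo, tag, digest)
    }
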
Feb 9 10:05:56.884108 env[1144]: time="2024-02-09T10:05:56.884075424Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\"" Feb 9 10:05:56.885810 env[1144]: time="2024-02-09T10:05:56.885783800Z" level=info msg="StartContainer for \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\"" Feb 9 10:05:56.905528 systemd[1]: Started cri-containerd-ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4.scope. Feb 9 10:05:56.953075 env[1144]: time="2024-02-09T10:05:56.953032973Z" level=info msg="StartContainer for \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\" returns successfully" Feb 9 10:05:56.992584 systemd[1]: cri-containerd-ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4.scope: Deactivated successfully. Feb 9 10:05:57.147551 env[1144]: time="2024-02-09T10:05:57.147438292Z" level=info msg="shim disconnected" id=ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4 Feb 9 10:05:57.147796 env[1144]: time="2024-02-09T10:05:57.147773807Z" level=warning msg="cleaning up after shim disconnected" id=ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4 namespace=k8s.io Feb 9 10:05:57.147861 env[1144]: time="2024-02-09T10:05:57.147847286Z" level=info msg="cleaning up dead shim" Feb 9 10:05:57.155383 env[1144]: time="2024-02-09T10:05:57.155350306Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:05:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2404 runtime=io.containerd.runc.v2\n" Feb 9 10:05:57.347385 kubelet[1970]: E0209 10:05:57.347357 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:57.361709 env[1144]: time="2024-02-09T10:05:57.351639269Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:05:57.371879 kubelet[1970]: I0209 10:05:57.371839 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-882xx" podStartSLOduration=8.371803438 podCreationTimestamp="2024-02-09 10:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:05:52.347919538 +0000 UTC m=+16.141864904" watchObservedRunningTime="2024-02-09 10:05:57.371803438 +0000 UTC m=+21.165748764" Feb 9 10:05:57.384467 env[1144]: time="2024-02-09T10:05:57.384408429Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\"" Feb 9 10:05:57.384967 env[1144]: time="2024-02-09T10:05:57.384932222Z" level=info msg="StartContainer for \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\"" Feb 9 10:05:57.398131 systemd[1]: Started cri-containerd-410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d.scope. 
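
mount-cgroup above is the first of cilium's init containers. Each one runs to completion, its systemd scope is deactivated, and containerd tears down the shim ("shim disconnected ... cleaning up dead shim") before the next starts. A minimal sketch of that run-to-completion sequencing, with no-op stand-ins for the actual container workloads:

    package main

    import "fmt"

    // runInitChain runs each step in order and stops at the first failure,
    // the same all-or-nothing contract init containers have.
    func runInitChain(steps map[string]func() error, order []string) error {
        for _, name := range order {
            fmt.Printf("StartContainer for %q\n", name)
            if err := steps[name](); err != nil {
                return fmt.Errorf("%s: %w", name, err)
            }
            fmt.Printf("%q exited, shim cleaned up\n", name)
        }
        return nil
    }

    func main() {
        noop := func() error { return nil }
        order := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
        steps := map[string]func() error{}
        for _, n := range order {
            steps[n] = noop
        }
        if err := runInitChain(steps, order); err != nil {
            fmt.Println("init failed:", err)
        }
    }
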
Feb 9 10:05:57.432131 env[1144]: time="2024-02-09T10:05:57.431893951Z" level=info msg="StartContainer for \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\" returns successfully" Feb 9 10:05:57.439498 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:05:57.439766 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:05:57.439945 systemd[1]: Stopping systemd-sysctl.service... Feb 9 10:05:57.441893 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:05:57.443603 systemd[1]: cri-containerd-410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d.scope: Deactivated successfully. Feb 9 10:05:57.452657 systemd[1]: Finished systemd-sysctl.service. Feb 9 10:05:57.462632 env[1144]: time="2024-02-09T10:05:57.462591659Z" level=info msg="shim disconnected" id=410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d Feb 9 10:05:57.462632 env[1144]: time="2024-02-09T10:05:57.462633899Z" level=warning msg="cleaning up after shim disconnected" id=410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d namespace=k8s.io Feb 9 10:05:57.462893 env[1144]: time="2024-02-09T10:05:57.462643738Z" level=info msg="cleaning up dead shim" Feb 9 10:05:57.468913 env[1144]: time="2024-02-09T10:05:57.468873175Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:05:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2468 runtime=io.containerd.runc.v2\n" Feb 9 10:05:57.880582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4-rootfs.mount: Deactivated successfully. Feb 9 10:05:58.308958 env[1144]: time="2024-02-09T10:05:58.308919077Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:58.310117 env[1144]: time="2024-02-09T10:05:58.310088382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:58.312181 env[1144]: time="2024-02-09T10:05:58.312152716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:05:58.312384 env[1144]: time="2024-02-09T10:05:58.312360153Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:05:58.314274 env[1144]: time="2024-02-09T10:05:58.314241729Z" level=info msg="CreateContainer within sandbox \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:05:58.323216 env[1144]: time="2024-02-09T10:05:58.323170374Z" level=info msg="CreateContainer within sandbox \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\"" Feb 9 10:05:58.323742 env[1144]: time="2024-02-09T10:05:58.323714207Z" level=info msg="StartContainer for 
\"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\"" Feb 9 10:05:58.342591 systemd[1]: Started cri-containerd-7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e.scope. Feb 9 10:05:58.354818 kubelet[1970]: E0209 10:05:58.354605 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:58.369957 env[1144]: time="2024-02-09T10:05:58.369894975Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:05:58.420460 env[1144]: time="2024-02-09T10:05:58.420382247Z" level=info msg="StartContainer for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" returns successfully" Feb 9 10:05:58.435628 env[1144]: time="2024-02-09T10:05:58.435575492Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\"" Feb 9 10:05:58.437439 env[1144]: time="2024-02-09T10:05:58.436301683Z" level=info msg="StartContainer for \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\"" Feb 9 10:05:58.453194 systemd[1]: Started cri-containerd-d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6.scope. Feb 9 10:05:58.519326 env[1144]: time="2024-02-09T10:05:58.519277058Z" level=info msg="StartContainer for \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\" returns successfully" Feb 9 10:05:58.524527 systemd[1]: cri-containerd-d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6.scope: Deactivated successfully. Feb 9 10:05:58.568434 env[1144]: time="2024-02-09T10:05:58.568329709Z" level=info msg="shim disconnected" id=d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6 Feb 9 10:05:58.568761 env[1144]: time="2024-02-09T10:05:58.568737623Z" level=warning msg="cleaning up after shim disconnected" id=d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6 namespace=k8s.io Feb 9 10:05:58.568858 env[1144]: time="2024-02-09T10:05:58.568842422Z" level=info msg="cleaning up dead shim" Feb 9 10:05:58.580863 env[1144]: time="2024-02-09T10:05:58.580824108Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:05:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2567 runtime=io.containerd.runc.v2\n" Feb 9 10:05:58.879922 systemd[1]: run-containerd-runc-k8s.io-7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e-runc.CLlGvQ.mount: Deactivated successfully. 
Feb 9 10:05:59.357083 kubelet[1970]: E0209 10:05:59.357036 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:59.358501 kubelet[1970]: E0209 10:05:59.358467 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:05:59.360401 env[1144]: time="2024-02-09T10:05:59.360357828Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:05:59.374515 env[1144]: time="2024-02-09T10:05:59.374476775Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\"" Feb 9 10:05:59.375119 env[1144]: time="2024-02-09T10:05:59.375082048Z" level=info msg="StartContainer for \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\"" Feb 9 10:05:59.386717 kubelet[1970]: I0209 10:05:59.385352 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-6fd9d" podStartSLOduration=3.515352949 podCreationTimestamp="2024-02-09 10:05:49 +0000 UTC" firstStartedPulling="2024-02-09 10:05:51.443045692 +0000 UTC m=+15.236991058" lastFinishedPulling="2024-02-09 10:05:58.312995425 +0000 UTC m=+22.106940791" observedRunningTime="2024-02-09 10:05:59.366174517 +0000 UTC m=+23.160119883" watchObservedRunningTime="2024-02-09 10:05:59.385302682 +0000 UTC m=+23.179248048" Feb 9 10:05:59.392551 systemd[1]: Started cri-containerd-8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c.scope. Feb 9 10:05:59.432561 systemd[1]: cri-containerd-8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c.scope: Deactivated successfully. 
Feb 9 10:05:59.441852 env[1144]: time="2024-02-09T10:05:59.441758110Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcac3996f_db00_49cc_8664_f837c20fb825.slice/cri-containerd-8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c.scope/memory.events\": no such file or directory" Feb 9 10:05:59.442867 env[1144]: time="2024-02-09T10:05:59.442825257Z" level=info msg="StartContainer for \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\" returns successfully" Feb 9 10:05:59.462458 env[1144]: time="2024-02-09T10:05:59.462411456Z" level=info msg="shim disconnected" id=8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c Feb 9 10:05:59.462458 env[1144]: time="2024-02-09T10:05:59.462453936Z" level=warning msg="cleaning up after shim disconnected" id=8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c namespace=k8s.io Feb 9 10:05:59.462632 env[1144]: time="2024-02-09T10:05:59.462463656Z" level=info msg="cleaning up dead shim" Feb 9 10:05:59.469021 env[1144]: time="2024-02-09T10:05:59.468980776Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:05:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2625 runtime=io.containerd.runc.v2\n" Feb 9 10:05:59.880000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c-rootfs.mount: Deactivated successfully. Feb 9 10:06:00.362100 kubelet[1970]: E0209 10:06:00.362075 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:00.362430 kubelet[1970]: E0209 10:06:00.362101 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:00.365434 env[1144]: time="2024-02-09T10:06:00.365395570Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:06:00.378703 env[1144]: time="2024-02-09T10:06:00.378652495Z" level=info msg="CreateContainer within sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\"" Feb 9 10:06:00.379099 env[1144]: time="2024-02-09T10:06:00.379070370Z" level=info msg="StartContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\"" Feb 9 10:06:00.394540 systemd[1]: Started cri-containerd-700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb.scope. 
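
The *cgroupsv2.Manager.EventChan warning above is a benign race: clean-cilium-state exits so quickly that its cgroup directory is gone before containerd can add an inotify watch on memory.events. A sketch of the same pattern, treating a vanished path as a normal outcome rather than an error; the path in main is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // watchMemoryEvents tries to watch a cgroup's memory.events file and
    // reports, rather than fails, when the cgroup already disappeared.
    func watchMemoryEvents(cgroupDir string) error {
        fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
        if err != nil {
            return err
        }
        defer unix.Close(fd)
        _, err = unix.InotifyAddWatch(fd, cgroupDir+"/memory.events", unix.IN_MODIFY)
        if errors.Is(err, unix.ENOENT) {
            fmt.Println("cgroup already gone; container exited first")
            return nil
        }
        return err
    }

    func main() {
        _ = watchMemoryEvents(os.TempDir() + "/no-such-scope")
    }
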
Feb 9 10:06:00.434210 env[1144]: time="2024-02-09T10:06:00.434163963Z" level=info msg="StartContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" returns successfully" Feb 9 10:06:00.583095 kubelet[1970]: I0209 10:06:00.582362 1970 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 10:06:00.601018 kubelet[1970]: I0209 10:06:00.600251 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:00.604957 kubelet[1970]: I0209 10:06:00.604404 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:00.605904 systemd[1]: Created slice kubepods-burstable-pod9dbf9d8c_3a99_4593_bdeb_13b1c3290fe7.slice. Feb 9 10:06:00.610425 systemd[1]: Created slice kubepods-burstable-pod93124aac_5714_4432_9349_ef7827b11713.slice. Feb 9 10:06:00.644286 kubelet[1970]: I0209 10:06:00.644185 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvm2\" (UniqueName: \"kubernetes.io/projected/9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7-kube-api-access-2hvm2\") pod \"coredns-5d78c9869d-sst2d\" (UID: \"9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7\") " pod="kube-system/coredns-5d78c9869d-sst2d" Feb 9 10:06:00.644561 kubelet[1970]: I0209 10:06:00.644548 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndz8q\" (UniqueName: \"kubernetes.io/projected/93124aac-5714-4432-9349-ef7827b11713-kube-api-access-ndz8q\") pod \"coredns-5d78c9869d-zttmp\" (UID: \"93124aac-5714-4432-9349-ef7827b11713\") " pod="kube-system/coredns-5d78c9869d-zttmp" Feb 9 10:06:00.644704 kubelet[1970]: I0209 10:06:00.644675 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7-config-volume\") pod \"coredns-5d78c9869d-sst2d\" (UID: \"9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7\") " pod="kube-system/coredns-5d78c9869d-sst2d" Feb 9 10:06:00.644801 kubelet[1970]: I0209 10:06:00.644789 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93124aac-5714-4432-9349-ef7827b11713-config-volume\") pod \"coredns-5d78c9869d-zttmp\" (UID: \"93124aac-5714-4432-9349-ef7827b11713\") " pod="kube-system/coredns-5d78c9869d-zttmp" Feb 9 10:06:00.720680 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
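
The kernel warning above indicates unprivileged BPF is allowed (kernel.unprivileged_bpf_disabled=0) on a CPU where Spectre v2 BHB mitigations matter; Cilium itself loads BPF as root, so the warning concerns other, unprivileged users. Checking the knob from Go is just a procfs read (0 allows unprivileged BPF; 1 and 2 disable it):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        switch v := string(bytes.TrimSpace(raw)); v {
        case "0":
            fmt.Println("unprivileged eBPF is enabled (matches the kernel warning)")
        case "1", "2":
            fmt.Println("unprivileged eBPF is disabled")
        default:
            fmt.Println("unexpected value:", v)
        }
    }
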
Feb 9 10:06:00.909004 kubelet[1970]: E0209 10:06:00.908887 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:00.909659 env[1144]: time="2024-02-09T10:06:00.909614580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-sst2d,Uid:9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7,Namespace:kube-system,Attempt:0,}" Feb 9 10:06:00.913321 kubelet[1970]: E0209 10:06:00.913297 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:00.913758 env[1144]: time="2024-02-09T10:06:00.913723492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zttmp,Uid:93124aac-5714-4432-9349-ef7827b11713,Namespace:kube-system,Attempt:0,}" Feb 9 10:06:01.047714 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:06:01.367107 kubelet[1970]: E0209 10:06:01.367077 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:01.379971 kubelet[1970]: I0209 10:06:01.379681 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-l9jkd" podStartSLOduration=6.854859757 podCreationTimestamp="2024-02-09 10:05:49 +0000 UTC" firstStartedPulling="2024-02-09 10:05:51.345131091 +0000 UTC m=+15.139076457" lastFinishedPulling="2024-02-09 10:05:56.869918423 +0000 UTC m=+20.663863749" observedRunningTime="2024-02-09 10:06:01.379051976 +0000 UTC m=+25.172997342" watchObservedRunningTime="2024-02-09 10:06:01.379647049 +0000 UTC m=+25.173592415" Feb 9 10:06:02.368379 kubelet[1970]: E0209 10:06:02.368345 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:02.659100 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 10:06:02.659195 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 10:06:02.656599 systemd-networkd[1044]: cilium_host: Link UP Feb 9 10:06:02.656761 systemd-networkd[1044]: cilium_net: Link UP Feb 9 10:06:02.658639 systemd-networkd[1044]: cilium_net: Gained carrier Feb 9 10:06:02.658839 systemd-networkd[1044]: cilium_host: Gained carrier Feb 9 10:06:02.738222 systemd-networkd[1044]: cilium_vxlan: Link UP Feb 9 10:06:02.738229 systemd-networkd[1044]: cilium_vxlan: Gained carrier Feb 9 10:06:02.836511 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:40702.service. Feb 9 10:06:02.883402 sshd[2892]: Accepted publickey for core from 10.0.0.1 port 40702 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:02.884828 sshd[2892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:02.888552 systemd-logind[1133]: New session 6 of user core. Feb 9 10:06:02.888995 systemd[1]: Started session-6.scope. Feb 9 10:06:03.064716 kernel: NET: Registered PF_ALG protocol family Feb 9 10:06:03.073222 sshd[2892]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:03.081398 systemd-logind[1133]: Session 6 logged out. Waiting for processes to exit. Feb 9 10:06:03.081546 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:40702.service: Deactivated successfully. 
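
The podStartSLOduration above is the pod's created-to-running time with the image pull window subtracted, consistent with the timestamps in the entry: cilium-l9jkd was created at 10:05:49 and observed running at 10:06:01 (about 12.4s), and pulling ran from 10:05:51.345 to 10:05:56.869 (about 5.5s), leaving the reported ~6.85s. The arithmetic as a sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse(time.RFC3339Nano, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-02-09T10:05:49Z")
        running := mustParse("2024-02-09T10:06:01.379051976Z")
        pullStart := mustParse("2024-02-09T10:05:51.345131091Z")
        pullEnd := mustParse("2024-02-09T10:05:56.869918423Z")

        // The SLO duration excludes time spent pulling the image.
        slo := running.Sub(created) - pullEnd.Sub(pullStart)
        fmt.Println(slo) // ~6.854s, matching podStartSLOduration in the log
    }
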
Feb 9 10:06:03.082331 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 10:06:03.083058 systemd-logind[1133]: Removed session 6. Feb 9 10:06:03.228828 systemd-networkd[1044]: cilium_host: Gained IPv6LL Feb 9 10:06:03.292839 systemd-networkd[1044]: cilium_net: Gained IPv6LL Feb 9 10:06:03.369895 kubelet[1970]: E0209 10:06:03.369803 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:03.615930 systemd-networkd[1044]: lxc_health: Link UP Feb 9 10:06:03.632777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:06:03.630861 systemd-networkd[1044]: lxc_health: Gained carrier Feb 9 10:06:04.049733 systemd-networkd[1044]: lxc724bada4b597: Link UP Feb 9 10:06:04.057723 kernel: eth0: renamed from tmp0c34b Feb 9 10:06:04.071763 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:06:04.071837 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc724bada4b597: link becomes ready Feb 9 10:06:04.071897 systemd-networkd[1044]: lxc724bada4b597: Gained carrier Feb 9 10:06:04.072446 systemd-networkd[1044]: lxccaba24c090e7: Link UP Feb 9 10:06:04.082005 kernel: eth0: renamed from tmp04573 Feb 9 10:06:04.094431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:06:04.094508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccaba24c090e7: link becomes ready Feb 9 10:06:04.090764 systemd-networkd[1044]: lxccaba24c090e7: Gained carrier Feb 9 10:06:04.316799 systemd-networkd[1044]: cilium_vxlan: Gained IPv6LL Feb 9 10:06:05.212835 systemd-networkd[1044]: lxc724bada4b597: Gained IPv6LL Feb 9 10:06:05.269089 kubelet[1970]: E0209 10:06:05.269050 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:05.373646 kubelet[1970]: E0209 10:06:05.373577 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:05.404798 systemd-networkd[1044]: lxc_health: Gained IPv6LL Feb 9 10:06:06.044878 systemd-networkd[1044]: lxccaba24c090e7: Gained IPv6LL Feb 9 10:06:07.524039 env[1144]: time="2024-02-09T10:06:07.523944986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:07.524039 env[1144]: time="2024-02-09T10:06:07.523991946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:07.524039 env[1144]: time="2024-02-09T10:06:07.524001866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:07.524510 env[1144]: time="2024-02-09T10:06:07.524220184Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c34b806b613e93ba4a0c252dec03872cc2d2c99c10c78c3432a7c45d36cd555 pid=3209 runtime=io.containerd.runc.v2 Feb 9 10:06:07.531814 env[1144]: time="2024-02-09T10:06:07.529628296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:07.531814 env[1144]: time="2024-02-09T10:06:07.529668135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:07.531814 env[1144]: time="2024-02-09T10:06:07.529677775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:07.531814 env[1144]: time="2024-02-09T10:06:07.529790774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04573b1998e519989bf95a1914dc585b6c90371d02f43f02222345348485031c pid=3224 runtime=io.containerd.runc.v2 Feb 9 10:06:07.538991 systemd[1]: Started cri-containerd-0c34b806b613e93ba4a0c252dec03872cc2d2c99c10c78c3432a7c45d36cd555.scope. Feb 9 10:06:07.549753 systemd[1]: Started cri-containerd-04573b1998e519989bf95a1914dc585b6c90371d02f43f02222345348485031c.scope. Feb 9 10:06:07.579461 systemd-resolved[1090]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:06:07.584655 systemd-resolved[1090]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:06:07.600135 env[1144]: time="2024-02-09T10:06:07.600097151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zttmp,Uid:93124aac-5714-4432-9349-ef7827b11713,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c34b806b613e93ba4a0c252dec03872cc2d2c99c10c78c3432a7c45d36cd555\"" Feb 9 10:06:07.601272 kubelet[1970]: E0209 10:06:07.600814 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:07.604077 env[1144]: time="2024-02-09T10:06:07.603330482Z" level=info msg="CreateContainer within sandbox \"0c34b806b613e93ba4a0c252dec03872cc2d2c99c10c78c3432a7c45d36cd555\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:06:07.607389 env[1144]: time="2024-02-09T10:06:07.607354846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-sst2d,Uid:9dbf9d8c-3a99-4593-bdeb-13b1c3290fe7,Namespace:kube-system,Attempt:0,} returns sandbox id \"04573b1998e519989bf95a1914dc585b6c90371d02f43f02222345348485031c\"" Feb 9 10:06:07.608088 kubelet[1970]: E0209 10:06:07.607932 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:07.611251 env[1144]: time="2024-02-09T10:06:07.611216332Z" level=info msg="CreateContainer within sandbox \"04573b1998e519989bf95a1914dc585b6c90371d02f43f02222345348485031c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:06:07.620339 env[1144]: time="2024-02-09T10:06:07.620298892Z" level=info msg="CreateContainer within sandbox \"0c34b806b613e93ba4a0c252dec03872cc2d2c99c10c78c3432a7c45d36cd555\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b44659aa4a8fc9e249ca6cf9ab541fb6801c8aeea2ad57f2d010b96869a7104\"" Feb 9 10:06:07.620985 env[1144]: time="2024-02-09T10:06:07.620955646Z" level=info msg="StartContainer for \"2b44659aa4a8fc9e249ca6cf9ab541fb6801c8aeea2ad57f2d010b96869a7104\"" Feb 9 10:06:07.624793 env[1144]: time="2024-02-09T10:06:07.624750652Z" level=info msg="CreateContainer within sandbox \"04573b1998e519989bf95a1914dc585b6c90371d02f43f02222345348485031c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e04bf3a86dcda98437144323e7600da5d4df027fdd306d7e3fb58e84f2334f0\"" Feb 9 10:06:07.625457 env[1144]: 
time="2024-02-09T10:06:07.625411726Z" level=info msg="StartContainer for \"7e04bf3a86dcda98437144323e7600da5d4df027fdd306d7e3fb58e84f2334f0\"" Feb 9 10:06:07.636448 systemd[1]: Started cri-containerd-2b44659aa4a8fc9e249ca6cf9ab541fb6801c8aeea2ad57f2d010b96869a7104.scope. Feb 9 10:06:07.646287 systemd[1]: Started cri-containerd-7e04bf3a86dcda98437144323e7600da5d4df027fdd306d7e3fb58e84f2334f0.scope. Feb 9 10:06:07.680576 env[1144]: time="2024-02-09T10:06:07.680534718Z" level=info msg="StartContainer for \"2b44659aa4a8fc9e249ca6cf9ab541fb6801c8aeea2ad57f2d010b96869a7104\" returns successfully" Feb 9 10:06:07.698888 env[1144]: time="2024-02-09T10:06:07.698323520Z" level=info msg="StartContainer for \"7e04bf3a86dcda98437144323e7600da5d4df027fdd306d7e3fb58e84f2334f0\" returns successfully" Feb 9 10:06:08.077853 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:40710.service. Feb 9 10:06:08.122208 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 40710 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:08.123534 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:08.126721 systemd-logind[1133]: New session 7 of user core. Feb 9 10:06:08.127569 systemd[1]: Started session-7.scope. Feb 9 10:06:08.238516 sshd[3354]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:08.240754 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 10:06:08.241304 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:40710.service: Deactivated successfully. Feb 9 10:06:08.242198 systemd-logind[1133]: Session 7 logged out. Waiting for processes to exit. Feb 9 10:06:08.242760 systemd-logind[1133]: Removed session 7. Feb 9 10:06:08.380063 kubelet[1970]: E0209 10:06:08.379909 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:08.382790 kubelet[1970]: E0209 10:06:08.382753 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:08.396957 kubelet[1970]: I0209 10:06:08.396903 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-sst2d" podStartSLOduration=19.396863410999998 podCreationTimestamp="2024-02-09 10:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:06:08.396296976 +0000 UTC m=+32.190242342" watchObservedRunningTime="2024-02-09 10:06:08.396863411 +0000 UTC m=+32.190808777" Feb 9 10:06:08.397074 kubelet[1970]: I0209 10:06:08.396981 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-zttmp" podStartSLOduration=19.39696481 podCreationTimestamp="2024-02-09 10:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:06:08.389260076 +0000 UTC m=+32.183205402" watchObservedRunningTime="2024-02-09 10:06:08.39696481 +0000 UTC m=+32.190910176" Feb 9 10:06:09.385084 kubelet[1970]: E0209 10:06:09.385045 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:09.385464 kubelet[1970]: E0209 10:06:09.385445 1970 dns.go:158] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:10.386734 kubelet[1970]: E0209 10:06:10.386671 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:13.242757 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:33526.service. Feb 9 10:06:13.284801 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 33526 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:13.286078 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:13.290204 systemd-logind[1133]: New session 8 of user core. Feb 9 10:06:13.291185 systemd[1]: Started session-8.scope. Feb 9 10:06:13.421786 sshd[3375]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:13.424482 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:33540.service. Feb 9 10:06:13.425082 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:33526.service: Deactivated successfully. Feb 9 10:06:13.425947 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 10:06:13.427842 systemd-logind[1133]: Session 8 logged out. Waiting for processes to exit. Feb 9 10:06:13.431827 systemd-logind[1133]: Removed session 8. Feb 9 10:06:13.484604 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 33540 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:13.485821 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:13.490982 systemd[1]: Started session-9.scope. Feb 9 10:06:13.492629 systemd-logind[1133]: New session 9 of user core. Feb 9 10:06:14.247019 sshd[3389]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:14.250169 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:33554.service. Feb 9 10:06:14.254385 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:33540.service: Deactivated successfully. Feb 9 10:06:14.254732 systemd-logind[1133]: Session 9 logged out. Waiting for processes to exit. Feb 9 10:06:14.255617 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 10:06:14.256435 systemd-logind[1133]: Removed session 9. Feb 9 10:06:14.294834 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 33554 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:14.296070 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:14.299759 systemd-logind[1133]: New session 10 of user core. Feb 9 10:06:14.299897 systemd[1]: Started session-10.scope. Feb 9 10:06:14.414347 sshd[3400]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:14.416560 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 10:06:14.417127 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:33554.service: Deactivated successfully. Feb 9 10:06:14.417943 systemd-logind[1133]: Session 10 logged out. Waiting for processes to exit. Feb 9 10:06:14.418487 systemd-logind[1133]: Removed session 10. Feb 9 10:06:19.418922 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:33566.service. Feb 9 10:06:19.460372 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 33566 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:19.461452 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:19.465204 systemd-logind[1133]: New session 11 of user core. 
Feb 9 10:06:19.465906 systemd[1]: Started session-11.scope. Feb 9 10:06:19.575074 sshd[3414]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:19.577386 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:33566.service: Deactivated successfully. Feb 9 10:06:19.578234 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 10:06:19.579062 systemd-logind[1133]: Session 11 logged out. Waiting for processes to exit. Feb 9 10:06:19.579886 systemd-logind[1133]: Removed session 11. Feb 9 10:06:24.579856 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:56888.service. Feb 9 10:06:24.621177 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 56888 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:24.622249 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:24.625664 systemd-logind[1133]: New session 12 of user core. Feb 9 10:06:24.626090 systemd[1]: Started session-12.scope. Feb 9 10:06:24.736773 sshd[3430]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:24.740756 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:56888.service: Deactivated successfully. Feb 9 10:06:24.741359 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 10:06:24.741955 systemd-logind[1133]: Session 12 logged out. Waiting for processes to exit. Feb 9 10:06:24.743067 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:56900.service. Feb 9 10:06:24.743808 systemd-logind[1133]: Removed session 12. Feb 9 10:06:24.784143 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 56900 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:24.785197 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:24.788451 systemd-logind[1133]: New session 13 of user core. Feb 9 10:06:24.788829 systemd[1]: Started session-13.scope. Feb 9 10:06:24.957165 sshd[3443]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:24.960878 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:56914.service. Feb 9 10:06:24.961398 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:56900.service: Deactivated successfully. Feb 9 10:06:24.962095 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 10:06:24.962843 systemd-logind[1133]: Session 13 logged out. Waiting for processes to exit. Feb 9 10:06:24.963599 systemd-logind[1133]: Removed session 13. Feb 9 10:06:25.003260 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 56914 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:25.004662 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:25.007738 systemd-logind[1133]: New session 14 of user core. Feb 9 10:06:25.008516 systemd[1]: Started session-14.scope. Feb 9 10:06:25.813167 sshd[3453]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:25.816436 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:56914.service: Deactivated successfully. Feb 9 10:06:25.817171 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 10:06:25.820022 systemd-logind[1133]: Session 14 logged out. Waiting for processes to exit. Feb 9 10:06:25.821310 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:56922.service. Feb 9 10:06:25.822567 systemd-logind[1133]: Removed session 14. 
Feb 9 10:06:25.869412 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 56922 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:25.870669 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:25.873892 systemd-logind[1133]: New session 15 of user core. Feb 9 10:06:25.874751 systemd[1]: Started session-15.scope. Feb 9 10:06:26.183461 sshd[3474]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:26.186565 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:56922.service: Deactivated successfully. Feb 9 10:06:26.187288 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 10:06:26.187888 systemd-logind[1133]: Session 15 logged out. Waiting for processes to exit. Feb 9 10:06:26.190383 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:56928.service. Feb 9 10:06:26.192652 systemd-logind[1133]: Removed session 15. Feb 9 10:06:26.235316 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 56928 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:26.236449 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:26.239900 systemd-logind[1133]: New session 16 of user core. Feb 9 10:06:26.240682 systemd[1]: Started session-16.scope. Feb 9 10:06:26.354949 sshd[3485]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:26.357829 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:56928.service: Deactivated successfully. Feb 9 10:06:26.358548 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 10:06:26.359121 systemd-logind[1133]: Session 16 logged out. Waiting for processes to exit. Feb 9 10:06:26.359867 systemd-logind[1133]: Removed session 16. Feb 9 10:06:31.359811 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:56942.service. Feb 9 10:06:31.401287 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 56942 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:31.402556 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:31.406383 systemd-logind[1133]: New session 17 of user core. Feb 9 10:06:31.407080 systemd[1]: Started session-17.scope. Feb 9 10:06:31.515043 sshd[3501]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:31.517577 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:56942.service: Deactivated successfully. Feb 9 10:06:31.518401 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 10:06:31.518987 systemd-logind[1133]: Session 17 logged out. Waiting for processes to exit. Feb 9 10:06:31.519606 systemd-logind[1133]: Removed session 17. Feb 9 10:06:36.519699 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:45964.service. Feb 9 10:06:36.561495 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:36.562973 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:36.566482 systemd-logind[1133]: New session 18 of user core. Feb 9 10:06:36.566852 systemd[1]: Started session-18.scope. Feb 9 10:06:36.675447 sshd[3517]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:36.677790 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:45964.service: Deactivated successfully. Feb 9 10:06:36.678493 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 10:06:36.679038 systemd-logind[1133]: Session 18 logged out. Waiting for processes to exit. Feb 9 10:06:36.679647 systemd-logind[1133]: Removed session 18. 
Feb 9 10:06:41.680421 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:45976.service. Feb 9 10:06:41.722169 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 45976 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:41.723410 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:41.727170 systemd-logind[1133]: New session 19 of user core. Feb 9 10:06:41.728472 systemd[1]: Started session-19.scope. Feb 9 10:06:41.849330 sshd[3531]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:41.852269 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:45976.service: Deactivated successfully. Feb 9 10:06:41.853018 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 10:06:41.853824 systemd-logind[1133]: Session 19 logged out. Waiting for processes to exit. Feb 9 10:06:41.854514 systemd-logind[1133]: Removed session 19. Feb 9 10:06:46.854099 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:44178.service. Feb 9 10:06:46.896271 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 44178 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:46.897458 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:46.900895 systemd-logind[1133]: New session 20 of user core. Feb 9 10:06:46.902413 systemd[1]: Started session-20.scope. Feb 9 10:06:47.021754 sshd[3544]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:47.024852 systemd-logind[1133]: Session 20 logged out. Waiting for processes to exit. Feb 9 10:06:47.024970 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:44178.service: Deactivated successfully. Feb 9 10:06:47.025977 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 10:06:47.028502 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:44192.service. Feb 9 10:06:47.029247 systemd-logind[1133]: Removed session 20. Feb 9 10:06:47.070507 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 44192 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:47.072098 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:47.076563 systemd-logind[1133]: New session 21 of user core. Feb 9 10:06:47.077370 systemd[1]: Started session-21.scope. Feb 9 10:06:49.034841 env[1144]: time="2024-02-09T10:06:49.034791501Z" level=info msg="StopContainer for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" with timeout 30 (s)" Feb 9 10:06:49.035430 env[1144]: time="2024-02-09T10:06:49.035342269Z" level=info msg="Stop container \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" with signal terminated" Feb 9 10:06:49.045967 systemd[1]: cri-containerd-7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e.scope: Deactivated successfully. Feb 9 10:06:49.066264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e-rootfs.mount: Deactivated successfully. 
Feb 9 10:06:49.070131 env[1144]: time="2024-02-09T10:06:49.070080398Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:06:49.075571 env[1144]: time="2024-02-09T10:06:49.075489635Z" level=info msg="StopContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" with timeout 1 (s)" Feb 9 10:06:49.076256 env[1144]: time="2024-02-09T10:06:49.076224805Z" level=info msg="Stop container \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" with signal terminated" Feb 9 10:06:49.078197 env[1144]: time="2024-02-09T10:06:49.078158192Z" level=info msg="shim disconnected" id=7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e Feb 9 10:06:49.078285 env[1144]: time="2024-02-09T10:06:49.078201993Z" level=warning msg="cleaning up after shim disconnected" id=7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e namespace=k8s.io Feb 9 10:06:49.078285 env[1144]: time="2024-02-09T10:06:49.078211633Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.084460 systemd-networkd[1044]: lxc_health: Link DOWN Feb 9 10:06:49.084467 systemd-networkd[1044]: lxc_health: Lost carrier Feb 9 10:06:49.087705 env[1144]: time="2024-02-09T10:06:49.087647086Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.089985 env[1144]: time="2024-02-09T10:06:49.089940998Z" level=info msg="StopContainer for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" returns successfully" Feb 9 10:06:49.093934 env[1144]: time="2024-02-09T10:06:49.093902574Z" level=info msg="StopPodSandbox for \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\"" Feb 9 10:06:49.094016 env[1144]: time="2024-02-09T10:06:49.093965575Z" level=info msg="Container to stop \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.095276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12-shm.mount: Deactivated successfully. Feb 9 10:06:49.102635 systemd[1]: cri-containerd-285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12.scope: Deactivated successfully. Feb 9 10:06:49.118244 systemd[1]: cri-containerd-700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb.scope: Deactivated successfully. Feb 9 10:06:49.118546 systemd[1]: cri-containerd-700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb.scope: Consumed 6.392s CPU time. Feb 9 10:06:49.128089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12-rootfs.mount: Deactivated successfully. 
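
The stop sequence above is containerd's CRI plugin performing a graceful stop: StopContainer arrives with a timeout (30 s for the first container, which is the Kubernetes default termination grace period), the task receives SIGTERM, and only if it outlives the deadline is it killed hard; the cri-containerd-<id>.scope and rootfs.mount deactivations are systemd tearing down what the exited task left behind. Below is a minimal sketch of the same SIGTERM-then-SIGKILL pattern written against containerd's public Go client rather than the CRI plugin's internal path; the socket path is the stock one, the namespace comes from the "namespace=k8s.io" fields above, and the container ID (taken from the log) is assumed to still exist.

    // Gracefully stop a containerd task the way the log above describes:
    // SIGTERM, wait up to a timeout, then SIGKILL. Sketch only.
    package main

    import (
    	"context"
    	"log"
    	"syscall"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	id := "7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e"
    	container, err := client.LoadContainer(ctx, id)
    	if err != nil {
    		log.Fatal(err)
    	}
    	task, err := container.Task(ctx, nil)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Register the waiter before signalling to avoid missing the exit.
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    		log.Fatal(err)
    	}
    	select {
    	case status := <-exitCh:
    		log.Printf("exited with status %d", status.ExitCode())
    	case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
    		_ = task.Kill(ctx, syscall.SIGKILL)
    		<-exitCh
    	}
    }
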
Feb 9 10:06:49.132964 env[1144]: time="2024-02-09T10:06:49.132914524Z" level=info msg="shim disconnected" id=285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12 Feb 9 10:06:49.133112 env[1144]: time="2024-02-09T10:06:49.132965764Z" level=warning msg="cleaning up after shim disconnected" id=285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12 namespace=k8s.io Feb 9 10:06:49.133112 env[1144]: time="2024-02-09T10:06:49.132977605Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.138290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb-rootfs.mount: Deactivated successfully. Feb 9 10:06:49.141428 env[1144]: time="2024-02-09T10:06:49.141389763Z" level=info msg="shim disconnected" id=700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb Feb 9 10:06:49.141601 env[1144]: time="2024-02-09T10:06:49.141583126Z" level=warning msg="cleaning up after shim disconnected" id=700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb namespace=k8s.io Feb 9 10:06:49.141673 env[1144]: time="2024-02-09T10:06:49.141649287Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.144191 env[1144]: time="2024-02-09T10:06:49.144144602Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3654 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.144471 env[1144]: time="2024-02-09T10:06:49.144429246Z" level=info msg="TearDown network for sandbox \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\" successfully" Feb 9 10:06:49.144471 env[1144]: time="2024-02-09T10:06:49.144459286Z" level=info msg="StopPodSandbox for \"285c39c810ed994c75f2acfec0f384972082ddf5eb35d607f703c04e1fd18c12\" returns successfully" Feb 9 10:06:49.150194 env[1144]: time="2024-02-09T10:06:49.150161807Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.152269 env[1144]: time="2024-02-09T10:06:49.152233716Z" level=info msg="StopContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" returns successfully" Feb 9 10:06:49.152869 env[1144]: time="2024-02-09T10:06:49.152838124Z" level=info msg="StopPodSandbox for \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\"" Feb 9 10:06:49.152972 env[1144]: time="2024-02-09T10:06:49.152900565Z" level=info msg="Container to stop \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.152972 env[1144]: time="2024-02-09T10:06:49.152916046Z" level=info msg="Container to stop \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.152972 env[1144]: time="2024-02-09T10:06:49.152928686Z" level=info msg="Container to stop \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.152972 env[1144]: time="2024-02-09T10:06:49.152942206Z" level=info msg="Container to stop \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.152972 env[1144]: time="2024-02-09T10:06:49.152952446Z" level=info msg="Container to stop 
\"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.160959 systemd[1]: cri-containerd-5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61.scope: Deactivated successfully. Feb 9 10:06:49.185898 env[1144]: time="2024-02-09T10:06:49.185850030Z" level=info msg="shim disconnected" id=5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61 Feb 9 10:06:49.185898 env[1144]: time="2024-02-09T10:06:49.185893110Z" level=warning msg="cleaning up after shim disconnected" id=5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61 namespace=k8s.io Feb 9 10:06:49.185898 env[1144]: time="2024-02-09T10:06:49.185901910Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.196432 env[1144]: time="2024-02-09T10:06:49.196389018Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.196884 env[1144]: time="2024-02-09T10:06:49.196853305Z" level=info msg="TearDown network for sandbox \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" successfully" Feb 9 10:06:49.196975 env[1144]: time="2024-02-09T10:06:49.196957026Z" level=info msg="StopPodSandbox for \"5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61\" returns successfully" Feb 9 10:06:49.220883 kubelet[1970]: I0209 10:06:49.220848 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-bpf-maps\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.220883 kubelet[1970]: I0209 10:06:49.220891 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-kernel\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221282 kubelet[1970]: I0209 10:06:49.220916 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vh78\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221282 kubelet[1970]: I0209 10:06:49.220934 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-xtables-lock\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221282 kubelet[1970]: I0209 10:06:49.220953 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-hubble-tls\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221282 kubelet[1970]: I0209 10:06:49.220969 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-lib-modules\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221282 kubelet[1970]: 
I0209 10:06:49.220989 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twkt\" (UniqueName: \"kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt\") pod \"d581129b-31be-40fb-afe2-cf7dd49c665d\" (UID: \"d581129b-31be-40fb-afe2-cf7dd49c665d\") " Feb 9 10:06:49.221282 kubelet[1970]: I0209 10:06:49.221010 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac3996f-db00-49cc-8664-f837c20fb825-cilium-config-path\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221027 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-cgroup\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221044 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-run\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221059 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-etc-cni-netd\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221079 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cac3996f-db00-49cc-8664-f837c20fb825-clustermesh-secrets\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221096 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cni-path\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221455 kubelet[1970]: I0209 10:06:49.221121 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-hostproc\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221619 kubelet[1970]: I0209 10:06:49.221138 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-net\") pod \"cac3996f-db00-49cc-8664-f837c20fb825\" (UID: \"cac3996f-db00-49cc-8664-f837c20fb825\") " Feb 9 10:06:49.221619 kubelet[1970]: I0209 10:06:49.221160 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d581129b-31be-40fb-afe2-cf7dd49c665d-cilium-config-path\") pod \"d581129b-31be-40fb-afe2-cf7dd49c665d\" (UID: \"d581129b-31be-40fb-afe2-cf7dd49c665d\") " Feb 9 10:06:49.226261 kubelet[1970]: I0209 10:06:49.226057 1970 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226261 kubelet[1970]: I0209 10:06:49.226053 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226261 kubelet[1970]: I0209 10:06:49.226107 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226261 kubelet[1970]: I0209 10:06:49.226194 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-hostproc" (OuterVolumeSpecName: "hostproc") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226261 kubelet[1970]: I0209 10:06:49.226211 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cni-path" (OuterVolumeSpecName: "cni-path") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226417 kubelet[1970]: I0209 10:06:49.226225 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226417 kubelet[1970]: I0209 10:06:49.226307 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226417 kubelet[1970]: I0209 10:06:49.226326 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226417 kubelet[1970]: I0209 10:06:49.226342 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226417 kubelet[1970]: I0209 10:06:49.226357 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.226995 kubelet[1970]: W0209 10:06:49.226975 1970 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cac3996f-db00-49cc-8664-f837c20fb825/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:06:49.227233 kubelet[1970]: W0209 10:06:49.226973 1970 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d581129b-31be-40fb-afe2-cf7dd49c665d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:06:49.229202 kubelet[1970]: I0209 10:06:49.229173 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d581129b-31be-40fb-afe2-cf7dd49c665d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d581129b-31be-40fb-afe2-cf7dd49c665d" (UID: "d581129b-31be-40fb-afe2-cf7dd49c665d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:06:49.229407 kubelet[1970]: I0209 10:06:49.229381 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt" (OuterVolumeSpecName: "kube-api-access-9twkt") pod "d581129b-31be-40fb-afe2-cf7dd49c665d" (UID: "d581129b-31be-40fb-afe2-cf7dd49c665d"). InnerVolumeSpecName "kube-api-access-9twkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:49.229569 kubelet[1970]: I0209 10:06:49.229535 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac3996f-db00-49cc-8664-f837c20fb825-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:06:49.229838 kubelet[1970]: I0209 10:06:49.229809 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:49.231168 kubelet[1970]: I0209 10:06:49.231137 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac3996f-db00-49cc-8664-f837c20fb825-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:06:49.232066 kubelet[1970]: I0209 10:06:49.232042 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78" (OuterVolumeSpecName: "kube-api-access-6vh78") pod "cac3996f-db00-49cc-8664-f837c20fb825" (UID: "cac3996f-db00-49cc-8664-f837c20fb825"). InnerVolumeSpecName "kube-api-access-6vh78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:49.321761 kubelet[1970]: I0209 10:06:49.321723 1970 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321761 kubelet[1970]: I0209 10:06:49.321756 1970 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321770 1970 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9twkt\" (UniqueName: \"kubernetes.io/projected/d581129b-31be-40fb-afe2-cf7dd49c665d-kube-api-access-9twkt\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321786 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac3996f-db00-49cc-8664-f837c20fb825-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321796 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321804 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321813 1970 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cac3996f-db00-49cc-8664-f837c20fb825-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321824 1970 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321833 1970 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.321971 kubelet[1970]: I0209 10:06:49.321842 1970 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321851 1970 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321860 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d581129b-31be-40fb-afe2-cf7dd49c665d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321868 1970 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321877 1970 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321887 1970 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6vh78\" (UniqueName: \"kubernetes.io/projected/cac3996f-db00-49cc-8664-f837c20fb825-kube-api-access-6vh78\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.322177 kubelet[1970]: I0209 10:06:49.321895 1970 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac3996f-db00-49cc-8664-f837c20fb825-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 10:06:49.460947 kubelet[1970]: I0209 10:06:49.460909 1970 scope.go:115] "RemoveContainer" containerID="7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e" Feb 9 10:06:49.462169 env[1144]: time="2024-02-09T10:06:49.462130042Z" level=info msg="RemoveContainer for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\"" Feb 9 10:06:49.465991 systemd[1]: Removed slice kubepods-besteffort-podd581129b_31be_40fb_afe2_cf7dd49c665d.slice. 
Feb 9 10:06:49.468135 env[1144]: time="2024-02-09T10:06:49.468093926Z" level=info msg="RemoveContainer for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" returns successfully" Feb 9 10:06:49.468328 kubelet[1970]: I0209 10:06:49.468306 1970 scope.go:115] "RemoveContainer" containerID="7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e" Feb 9 10:06:49.469776 env[1144]: time="2024-02-09T10:06:49.469435185Z" level=error msg="ContainerStatus for \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\": not found" Feb 9 10:06:49.469862 kubelet[1970]: E0209 10:06:49.469617 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\": not found" containerID="7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e" Feb 9 10:06:49.469862 kubelet[1970]: I0209 10:06:49.469850 1970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e} err="failed to get container status \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e\": not found" Feb 9 10:06:49.469925 kubelet[1970]: I0209 10:06:49.469873 1970 scope.go:115] "RemoveContainer" containerID="700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb" Feb 9 10:06:49.471452 systemd[1]: Removed slice kubepods-burstable-podcac3996f_db00_49cc_8664_f837c20fb825.slice. Feb 9 10:06:49.471532 systemd[1]: kubepods-burstable-podcac3996f_db00_49cc_8664_f837c20fb825.slice: Consumed 6.625s CPU time. 
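
The NotFound exchange above is expected rather than a fault: RemoveContainer has already succeeded, and when kubelet immediately re-queries ContainerStatus to resolve the same ID, the runtime can only answer with gRPC NotFound, which kubelet records and moves past. A sketch of that status query against the CRI socket follows, assuming the v1 CRI API's generated client; the point is that codes.NotFound must be treated as "already gone" rather than surfaced as an error.

    // Query container status over CRI and treat gRPC NotFound as
    // "already removed", mirroring the kubelet log lines above.
    // Sketch assuming the v1 CRI API on containerd's socket.
    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/credentials/insecure"
    	"google.golang.org/grpc/status"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtime.NewRuntimeServiceClient(conn)
    	// ID taken from the log above; it has already been removed.
    	id := "7095aaebb2062b739dd3d6c49dec9a092546d7e2bb13fea63936a938b8c6b19e"
    	resp, err := client.ContainerStatus(context.Background(),
    		&runtime.ContainerStatusRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		log.Printf("container %s already gone", id) // the expected outcome here
    		return
    	}
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("state: %s", resp.GetStatus().GetState())
    }
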
Feb 9 10:06:49.472545 env[1144]: time="2024-02-09T10:06:49.472511108Z" level=info msg="RemoveContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\"" Feb 9 10:06:49.474981 env[1144]: time="2024-02-09T10:06:49.474952663Z" level=info msg="RemoveContainer for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" returns successfully" Feb 9 10:06:49.475125 kubelet[1970]: I0209 10:06:49.475102 1970 scope.go:115] "RemoveContainer" containerID="8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c" Feb 9 10:06:49.477188 env[1144]: time="2024-02-09T10:06:49.477160214Z" level=info msg="RemoveContainer for \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\"" Feb 9 10:06:49.485050 env[1144]: time="2024-02-09T10:06:49.484950604Z" level=info msg="RemoveContainer for \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\" returns successfully" Feb 9 10:06:49.485316 kubelet[1970]: I0209 10:06:49.485276 1970 scope.go:115] "RemoveContainer" containerID="d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6" Feb 9 10:06:49.486547 env[1144]: time="2024-02-09T10:06:49.486514186Z" level=info msg="RemoveContainer for \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\"" Feb 9 10:06:49.488870 env[1144]: time="2024-02-09T10:06:49.488579135Z" level=info msg="RemoveContainer for \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\" returns successfully" Feb 9 10:06:49.488944 kubelet[1970]: I0209 10:06:49.488747 1970 scope.go:115] "RemoveContainer" containerID="410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d" Feb 9 10:06:49.490603 env[1144]: time="2024-02-09T10:06:49.490571803Z" level=info msg="RemoveContainer for \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\"" Feb 9 10:06:49.493227 env[1144]: time="2024-02-09T10:06:49.493193280Z" level=info msg="RemoveContainer for \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\" returns successfully" Feb 9 10:06:49.493386 kubelet[1970]: I0209 10:06:49.493364 1970 scope.go:115] "RemoveContainer" containerID="ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4" Feb 9 10:06:49.494428 env[1144]: time="2024-02-09T10:06:49.494385296Z" level=info msg="RemoveContainer for \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\"" Feb 9 10:06:49.496474 env[1144]: time="2024-02-09T10:06:49.496437125Z" level=info msg="RemoveContainer for \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\" returns successfully" Feb 9 10:06:49.496622 kubelet[1970]: I0209 10:06:49.496595 1970 scope.go:115] "RemoveContainer" containerID="700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb" Feb 9 10:06:49.496892 env[1144]: time="2024-02-09T10:06:49.496828451Z" level=error msg="ContainerStatus for \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\": not found" Feb 9 10:06:49.497063 kubelet[1970]: E0209 10:06:49.497037 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\": not found" containerID="700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb" Feb 9 10:06:49.497092 kubelet[1970]: I0209 10:06:49.497073 1970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb} err="failed to get container status \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"700e427897194dae4dde4ab83e7d555a7d33557aa0b9708e1ddcd984294985cb\": not found" Feb 9 10:06:49.497092 kubelet[1970]: I0209 10:06:49.497086 1970 scope.go:115] "RemoveContainer" containerID="8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c" Feb 9 10:06:49.497334 env[1144]: time="2024-02-09T10:06:49.497280137Z" level=error msg="ContainerStatus for \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\": not found" Feb 9 10:06:49.497437 kubelet[1970]: E0209 10:06:49.497423 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\": not found" containerID="8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c" Feb 9 10:06:49.497463 kubelet[1970]: I0209 10:06:49.497452 1970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c} err="failed to get container status \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dffab711d9203b6418170400e9c2a12ac71e915abb2787fdd4e10b2a180fb5c\": not found" Feb 9 10:06:49.497463 kubelet[1970]: I0209 10:06:49.497463 1970 scope.go:115] "RemoveContainer" containerID="d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6" Feb 9 10:06:49.497648 env[1144]: time="2024-02-09T10:06:49.497612102Z" level=error msg="ContainerStatus for \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\": not found" Feb 9 10:06:49.497760 kubelet[1970]: E0209 10:06:49.497745 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\": not found" containerID="d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6" Feb 9 10:06:49.497788 kubelet[1970]: I0209 10:06:49.497776 1970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6} err="failed to get container status \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0aeb7e62ca6822d083c876bd5ef6423dc56378d302666a087e8af24a69201f6\": not found" Feb 9 10:06:49.497788 kubelet[1970]: I0209 10:06:49.497786 1970 scope.go:115] "RemoveContainer" containerID="410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d" Feb 9 10:06:49.497979 env[1144]: time="2024-02-09T10:06:49.497935707Z" level=error msg="ContainerStatus for 
\"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\": not found" Feb 9 10:06:49.498077 kubelet[1970]: E0209 10:06:49.498066 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\": not found" containerID="410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d" Feb 9 10:06:49.498108 kubelet[1970]: I0209 10:06:49.498089 1970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d} err="failed to get container status \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\": rpc error: code = NotFound desc = an error occurred when try to find container \"410deeb465d56a69e591d96a8e6c93dfd406312f59740c9bebeae50ecaf5b81d\": not found" Feb 9 10:06:49.498108 kubelet[1970]: I0209 10:06:49.498099 1970 scope.go:115] "RemoveContainer" containerID="ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4" Feb 9 10:06:49.498277 env[1144]: time="2024-02-09T10:06:49.498229391Z" level=error msg="ContainerStatus for \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\": not found" Feb 9 10:06:49.498422 kubelet[1970]: E0209 10:06:49.498396 1970 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\": not found" containerID="ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4" Feb 9 10:06:49.498451 kubelet[1970]: I0209 10:06:49.498441 1970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4} err="failed to get container status \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee91d1a000f846ecb05b74fea094285e91b8a3e49e1d682c9b978d261da461f4\": not found" Feb 9 10:06:50.040347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61-rootfs.mount: Deactivated successfully. Feb 9 10:06:50.040437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c3ed11ead3e0bd13ddb3082646a712c1410e613ae4a3c1bd44470b425194c61-shm.mount: Deactivated successfully. Feb 9 10:06:50.040491 systemd[1]: var-lib-kubelet-pods-d581129b\x2d31be\x2d40fb\x2dafe2\x2dcf7dd49c665d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9twkt.mount: Deactivated successfully. Feb 9 10:06:50.040557 systemd[1]: var-lib-kubelet-pods-cac3996f\x2ddb00\x2d49cc\x2d8664\x2df837c20fb825-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vh78.mount: Deactivated successfully. Feb 9 10:06:50.040605 systemd[1]: var-lib-kubelet-pods-cac3996f\x2ddb00\x2d49cc\x2d8664\x2df837c20fb825-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 10:06:50.040655 systemd[1]: var-lib-kubelet-pods-cac3996f\x2ddb00\x2d49cc\x2d8664\x2df837c20fb825-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:06:50.308827 kubelet[1970]: I0209 10:06:50.308801 1970 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=cac3996f-db00-49cc-8664-f837c20fb825 path="/var/lib/kubelet/pods/cac3996f-db00-49cc-8664-f837c20fb825/volumes" Feb 9 10:06:50.309356 kubelet[1970]: I0209 10:06:50.309329 1970 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d581129b-31be-40fb-afe2-cf7dd49c665d path="/var/lib/kubelet/pods/d581129b-31be-40fb-afe2-cf7dd49c665d/volumes" Feb 9 10:06:50.995978 sshd[3557]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:50.999242 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:44198.service. Feb 9 10:06:51.001166 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:44192.service: Deactivated successfully. Feb 9 10:06:51.001893 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 10:06:51.002082 systemd[1]: session-21.scope: Consumed 1.292s CPU time. Feb 9 10:06:51.002462 systemd-logind[1133]: Session 21 logged out. Waiting for processes to exit. Feb 9 10:06:51.003340 systemd-logind[1133]: Removed session 21. Feb 9 10:06:51.040864 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:51.042085 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:51.045145 systemd-logind[1133]: New session 22 of user core. Feb 9 10:06:51.046003 systemd[1]: Started session-22.scope. Feb 9 10:06:51.361826 kubelet[1970]: E0209 10:06:51.361802 1970 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:06:52.927301 sshd[3716]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:52.931158 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:56674.service. Feb 9 10:06:52.939100 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:44198.service: Deactivated successfully. Feb 9 10:06:52.939857 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 10:06:52.940035 systemd[1]: session-22.scope: Consumed 1.793s CPU time. Feb 9 10:06:52.943886 systemd-logind[1133]: Session 22 logged out. Waiting for processes to exit. Feb 9 10:06:52.946001 systemd-logind[1133]: Removed session 22. 
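
The mount unit names above are systemd's path escaping at work: "/" separators become "-", and bytes that would be ambiguous inside a unit name are hex-escaped as \xNN, which is why the dashes inside pod UIDs show up as \x2d and the "~" in kubernetes.io~projected shows up as \x7e. Below is a small decoder, equivalent in spirit to "systemd-escape --unescape --path" (a hand-rolled sketch, not systemd's own code):

    // Decode a systemd mount-unit name back into the path it guards:
    // unescaped "-" separates path components, "\xNN" escapes literal bytes.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func unescapeUnitPath(unit string) string {
    	name := strings.TrimSuffix(unit, ".mount")
    	var b strings.Builder
    	for i := 0; i < len(name); i++ {
    		switch {
    		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
    			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(v)) // e.g. \x2d -> '-', \x7e -> '~'
    				i += 3
    				continue
    			}
    			b.WriteByte(name[i])
    		case name[i] == '-':
    			b.WriteByte('/') // unescaped dashes are path separators
    		default:
    			b.WriteByte(name[i])
    		}
    	}
    	return "/" + b.String()
    }

    func main() {
    	// A unit name copied from the log above; prints
    	// /var/lib/kubelet/pods/cac3996f-db00-49cc-8664-f837c20fb825/volumes/kubernetes.io~secret/clustermesh-secrets
    	fmt.Println(unescapeUnitPath(
    		`var-lib-kubelet-pods-cac3996f\x2ddb00\x2d49cc\x2d8664\x2df837c20fb825-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount`))
    }

The decoded path matches the clustermesh-secrets volume kubelet reported detaching earlier, which is how these systemd mount units can be tied back to specific pod volumes.
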
Feb 9 10:06:52.966988 kubelet[1970]: I0209 10:06:52.966949 1970 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967013 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="mount-cgroup" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967024 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="mount-bpf-fs" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967031 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="clean-cilium-state" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967037 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="cilium-agent" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967044 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="apply-sysctl-overwrites" Feb 9 10:06:52.967288 kubelet[1970]: E0209 10:06:52.967056 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d581129b-31be-40fb-afe2-cf7dd49c665d" containerName="cilium-operator" Feb 9 10:06:52.967288 kubelet[1970]: I0209 10:06:52.967250 1970 memory_manager.go:346] "RemoveStaleState removing state" podUID="d581129b-31be-40fb-afe2-cf7dd49c665d" containerName="cilium-operator" Feb 9 10:06:52.967288 kubelet[1970]: I0209 10:06:52.967264 1970 memory_manager.go:346] "RemoveStaleState removing state" podUID="cac3996f-db00-49cc-8664-f837c20fb825" containerName="cilium-agent" Feb 9 10:06:52.973072 systemd[1]: Created slice kubepods-burstable-pod2fb86247_b672_421f_a56a_44c2dbec702f.slice. Feb 9 10:06:52.979908 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 56674 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:52.980623 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:52.984002 systemd-logind[1133]: New session 23 of user core. Feb 9 10:06:52.984777 systemd[1]: Started session-23.scope. Feb 9 10:06:53.105593 sshd[3730]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:53.108801 systemd[1]: Started sshd@23-10.0.0.120:22-10.0.0.1:56686.service. Feb 9 10:06:53.109926 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:56674.service: Deactivated successfully. Feb 9 10:06:53.110913 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 10:06:53.111831 systemd-logind[1133]: Session 23 logged out. Waiting for processes to exit. Feb 9 10:06:53.115660 systemd-logind[1133]: Removed session 23. 
Feb 9 10:06:53.140360 kubelet[1970]: I0209 10:06:53.140330 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-hostproc\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140373 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-hubble-tls\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140396 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfz94\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-kube-api-access-hfz94\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140415 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-run\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140436 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-net\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140458 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-etc-cni-netd\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140486 kubelet[1970]: I0209 10:06:53.140479 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-lib-modules\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140651 kubelet[1970]: I0209 10:06:53.140498 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-ipsec-secrets\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140651 kubelet[1970]: I0209 10:06:53.140519 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-xtables-lock\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.140651 kubelet[1970]: I0209 10:06:53.140537 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-config-path\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.141307 kubelet[1970]: I0209 10:06:53.140677 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cni-path\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.141307 kubelet[1970]: I0209 10:06:53.140749 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-clustermesh-secrets\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.141307 kubelet[1970]: I0209 10:06:53.140772 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-kernel\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.141307 kubelet[1970]: I0209 10:06:53.140792 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-cgroup\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.141307 kubelet[1970]: I0209 10:06:53.140809 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-bpf-maps\") pod \"cilium-wgmff\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") " pod="kube-system/cilium-wgmff" Feb 9 10:06:53.151416 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:06:53.152677 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:06:53.156271 systemd-logind[1133]: New session 24 of user core. Feb 9 10:06:53.156647 systemd[1]: Started session-24.scope. Feb 9 10:06:53.275825 kubelet[1970]: E0209 10:06:53.275725 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.276299 env[1144]: time="2024-02-09T10:06:53.276223289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgmff,Uid:2fb86247-b672-421f-a56a-44c2dbec702f,Namespace:kube-system,Attempt:0,}" Feb 9 10:06:53.287559 env[1144]: time="2024-02-09T10:06:53.287489267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:53.287559 env[1144]: time="2024-02-09T10:06:53.287533787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:53.287559 env[1144]: time="2024-02-09T10:06:53.287543828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:53.287930 env[1144]: time="2024-02-09T10:06:53.287883552Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63 pid=3765 runtime=io.containerd.runc.v2 Feb 9 10:06:53.298398 systemd[1]: Started cri-containerd-f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63.scope. Feb 9 10:06:53.338247 env[1144]: time="2024-02-09T10:06:53.338204447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgmff,Uid:2fb86247-b672-421f-a56a-44c2dbec702f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\"" Feb 9 10:06:53.339180 kubelet[1970]: E0209 10:06:53.338965 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.340842 env[1144]: time="2024-02-09T10:06:53.340804759Z" level=info msg="CreateContainer within sandbox \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:06:53.350414 env[1144]: time="2024-02-09T10:06:53.350373596Z" level=info msg="CreateContainer within sandbox \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\"" Feb 9 10:06:53.350962 env[1144]: time="2024-02-09T10:06:53.350905363Z" level=info msg="StartContainer for \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\"" Feb 9 10:06:53.366875 systemd[1]: Started cri-containerd-43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56.scope. Feb 9 10:06:53.386146 systemd[1]: cri-containerd-43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56.scope: Deactivated successfully. 
Feb 9 10:06:53.400085 env[1144]: time="2024-02-09T10:06:53.400033964Z" level=info msg="shim disconnected" id=43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56
Feb 9 10:06:53.400261 env[1144]: time="2024-02-09T10:06:53.400088404Z" level=warning msg="cleaning up after shim disconnected" id=43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56 namespace=k8s.io
Feb 9 10:06:53.400261 env[1144]: time="2024-02-09T10:06:53.400102364Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:53.407226 env[1144]: time="2024-02-09T10:06:53.407157891Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3822 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:06:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 10:06:53.407558 env[1144]: time="2024-02-09T10:06:53.407452334Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed"
Feb 9 10:06:53.408575 env[1144]: time="2024-02-09T10:06:53.408526988Z" level=error msg="Failed to pipe stdout of container \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\"" error="reading from a closed fifo"
Feb 9 10:06:53.408639 env[1144]: time="2024-02-09T10:06:53.408606949Z" level=error msg="Failed to pipe stderr of container \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\"" error="reading from a closed fifo"
Feb 9 10:06:53.410559 env[1144]: time="2024-02-09T10:06:53.410509172Z" level=error msg="StartContainer for \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 10:06:53.410931 kubelet[1970]: E0209 10:06:53.410905 1970 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56"
Feb 9 10:06:53.411402 kubelet[1970]: E0209 10:06:53.411252 1970 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 10:06:53.411402 kubelet[1970]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 10:06:53.411402 kubelet[1970]: rm /hostbin/cilium-mount
Feb 9 10:06:53.411540 kubelet[1970]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hfz94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-wgmff_kube-system(2fb86247-b672-421f-a56a-44c2dbec702f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 10:06:53.411540 kubelet[1970]: E0209 10:06:53.411319 1970 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wgmff" podUID=2fb86247-b672-421f-a56a-44c2dbec702f
Feb 9 10:06:53.474142 env[1144]: time="2024-02-09T10:06:53.474106950Z" level=info msg="StopPodSandbox for \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\""
Feb 9 10:06:53.474264 env[1144]: time="2024-02-09T10:06:53.474165550Z" level=info msg="Container to stop \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:06:53.481148 systemd[1]: cri-containerd-f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63.scope: Deactivated successfully.
Feb 9 10:06:53.505788 env[1144]: time="2024-02-09T10:06:53.505737697Z" level=info msg="shim disconnected" id=f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63
Feb 9 10:06:53.505788 env[1144]: time="2024-02-09T10:06:53.505786737Z" level=warning msg="cleaning up after shim disconnected" id=f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63 namespace=k8s.io
Feb 9 10:06:53.505971 env[1144]: time="2024-02-09T10:06:53.505796937Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:53.513037 env[1144]: time="2024-02-09T10:06:53.512981945Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3854 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:53.513280 env[1144]: time="2024-02-09T10:06:53.513253989Z" level=info msg="TearDown network for sandbox \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\" successfully"
Feb 9 10:06:53.513312 env[1144]: time="2024-02-09T10:06:53.513280309Z" level=info msg="StopPodSandbox for \"f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63\" returns successfully"
Feb 9 10:06:53.643888 kubelet[1970]: I0209 10:06:53.643845 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-bpf-maps\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.643888 kubelet[1970]: I0209 10:06:53.643893 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cni-path\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.643917 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-hubble-tls\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.643934 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-run\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.643964 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-ipsec-secrets\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.643983 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-kernel\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.644006 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfz94\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-kube-api-access-hfz94\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.644031 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-lib-modules\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644062 kubelet[1970]: I0209 10:06:53.644048 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-xtables-lock\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644091 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-etc-cni-netd\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644115 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-hostproc\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644136 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-clustermesh-secrets\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644154 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-cgroup\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644181 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-config-path\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644234 kubelet[1970]: I0209 10:06:53.644201 1970 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-net\") pod \"2fb86247-b672-421f-a56a-44c2dbec702f\" (UID: \"2fb86247-b672-421f-a56a-44c2dbec702f\") "
Feb 9 10:06:53.644367 kubelet[1970]: I0209 10:06:53.644268 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.644367 kubelet[1970]: I0209 10:06:53.644295 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.644367 kubelet[1970]: I0209 10:06:53.644311 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cni-path" (OuterVolumeSpecName: "cni-path") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.644477 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.644519 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.644933 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.644980 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-hostproc" (OuterVolumeSpecName: "hostproc") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.645011 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.645042 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: I0209 10:06:53.645046 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:06:53.645183 kubelet[1970]: W0209 10:06:53.645157 1970 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2fb86247-b672-421f-a56a-44c2dbec702f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:06:53.646956 kubelet[1970]: I0209 10:06:53.646906 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:06:53.647080 kubelet[1970]: I0209 10:06:53.647050 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:06:53.647622 kubelet[1970]: I0209 10:06:53.647597 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:06:53.648180 kubelet[1970]: I0209 10:06:53.648155 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:06:53.648999 kubelet[1970]: I0209 10:06:53.648966 1970 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-kube-api-access-hfz94" (OuterVolumeSpecName: "kube-api-access-hfz94") pod "2fb86247-b672-421f-a56a-44c2dbec702f" (UID: "2fb86247-b672-421f-a56a-44c2dbec702f"). InnerVolumeSpecName "kube-api-access-hfz94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:06:53.744723 kubelet[1970]: I0209 10:06:53.744674 1970 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744723 kubelet[1970]: I0209 10:06:53.744719 1970 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744723 kubelet[1970]: I0209 10:06:53.744730 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744743 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744753 1970 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744762 1970 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744770 1970 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744779 1970 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744788 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744796 1970 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fb86247-b672-421f-a56a-44c2dbec702f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744805 1970 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744814 1970 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744823 1970 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744832 1970 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hfz94\" (UniqueName: \"kubernetes.io/projected/2fb86247-b672-421f-a56a-44c2dbec702f-kube-api-access-hfz94\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:53.744908 kubelet[1970]: I0209 10:06:53.744842 1970 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fb86247-b672-421f-a56a-44c2dbec702f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 10:06:54.245551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6ce93a194335421f445d239baa06db1e24a138ee2a47e9acd0473bd6f173b63-shm.mount: Deactivated successfully.
Feb 9 10:06:54.245656 systemd[1]: var-lib-kubelet-pods-2fb86247\x2db672\x2d421f\x2da56a\x2d44c2dbec702f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhfz94.mount: Deactivated successfully.
Feb 9 10:06:54.245734 systemd[1]: var-lib-kubelet-pods-2fb86247\x2db672\x2d421f\x2da56a\x2d44c2dbec702f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:06:54.245785 systemd[1]: var-lib-kubelet-pods-2fb86247\x2db672\x2d421f\x2da56a\x2d44c2dbec702f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:06:54.245835 systemd[1]: var-lib-kubelet-pods-2fb86247\x2db672\x2d421f\x2da56a\x2d44c2dbec702f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:06:54.311678 systemd[1]: Removed slice kubepods-burstable-pod2fb86247_b672_421f_a56a_44c2dbec702f.slice.
Feb 9 10:06:54.477036 kubelet[1970]: I0209 10:06:54.476994 1970 scope.go:115] "RemoveContainer" containerID="43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56"
Feb 9 10:06:54.479442 env[1144]: time="2024-02-09T10:06:54.479154919Z" level=info msg="RemoveContainer for \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\""
Feb 9 10:06:54.481877 env[1144]: time="2024-02-09T10:06:54.481843791Z" level=info msg="RemoveContainer for \"43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56\" returns successfully"
Feb 9 10:06:54.532755 kubelet[1970]: I0209 10:06:54.532641 1970 topology_manager.go:212] "Topology Admit Handler"
Feb 9 10:06:54.532755 kubelet[1970]: E0209 10:06:54.532706 1970 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fb86247-b672-421f-a56a-44c2dbec702f" containerName="mount-cgroup"
Feb 9 10:06:54.532755 kubelet[1970]: I0209 10:06:54.532743 1970 memory_manager.go:346] "RemoveStaleState removing state" podUID="2fb86247-b672-421f-a56a-44c2dbec702f" containerName="mount-cgroup"
Feb 9 10:06:54.537751 systemd[1]: Created slice kubepods-burstable-pod4699820a_4f32_4b88_a7df_9e04aff5b1da.slice.
Feb 9 10:06:54.649962 kubelet[1970]: I0209 10:06:54.649902 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4699820a-4f32-4b88-a7df-9e04aff5b1da-hubble-tls\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.649962 kubelet[1970]: I0209 10:06:54.649972 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4699820a-4f32-4b88-a7df-9e04aff5b1da-cilium-config-path\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650128 kubelet[1970]: I0209 10:06:54.650011 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-etc-cni-netd\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650128 kubelet[1970]: I0209 10:06:54.650032 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-host-proc-sys-kernel\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650128 kubelet[1970]: I0209 10:06:54.650069 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-cilium-run\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650207 kubelet[1970]: I0209 10:06:54.650112 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-hostproc\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650233 kubelet[1970]: I0209 10:06:54.650211 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtqt\" (UniqueName: \"kubernetes.io/projected/4699820a-4f32-4b88-a7df-9e04aff5b1da-kube-api-access-fgtqt\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650266 kubelet[1970]: I0209 10:06:54.650236 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-lib-modules\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650297 kubelet[1970]: I0209 10:06:54.650281 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-host-proc-sys-net\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650324 kubelet[1970]: I0209 10:06:54.650304 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-bpf-maps\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650367 kubelet[1970]: I0209 10:06:54.650344 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-cilium-cgroup\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650394 kubelet[1970]: I0209 10:06:54.650371 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-cni-path\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650462 kubelet[1970]: I0209 10:06:54.650422 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4699820a-4f32-4b88-a7df-9e04aff5b1da-xtables-lock\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650462 kubelet[1970]: I0209 10:06:54.650449 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4699820a-4f32-4b88-a7df-9e04aff5b1da-clustermesh-secrets\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.650531 kubelet[1970]: I0209 10:06:54.650489 1970 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4699820a-4f32-4b88-a7df-9e04aff5b1da-cilium-ipsec-secrets\") pod \"cilium-bxsrz\" (UID: \"4699820a-4f32-4b88-a7df-9e04aff5b1da\") " pod="kube-system/cilium-bxsrz"
Feb 9 10:06:54.840229 kubelet[1970]: E0209 10:06:54.840192 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:54.841214 env[1144]: time="2024-02-09T10:06:54.840930750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxsrz,Uid:4699820a-4f32-4b88-a7df-9e04aff5b1da,Namespace:kube-system,Attempt:0,}"
Feb 9 10:06:54.852411 env[1144]: time="2024-02-09T10:06:54.852242763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:06:54.852411 env[1144]: time="2024-02-09T10:06:54.852283484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:06:54.852411 env[1144]: time="2024-02-09T10:06:54.852294124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:06:54.852550 env[1144]: time="2024-02-09T10:06:54.852440005Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490 pid=3882 runtime=io.containerd.runc.v2
Feb 9 10:06:54.863665 systemd[1]: Started cri-containerd-fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490.scope.
Feb 9 10:06:54.886736 env[1144]: time="2024-02-09T10:06:54.886676490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxsrz,Uid:4699820a-4f32-4b88-a7df-9e04aff5b1da,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\""
Feb 9 10:06:54.887535 kubelet[1970]: E0209 10:06:54.887515 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:54.889330 env[1144]: time="2024-02-09T10:06:54.889294681Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:06:54.897552 env[1144]: time="2024-02-09T10:06:54.897507577Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80\""
Feb 9 10:06:54.897983 env[1144]: time="2024-02-09T10:06:54.897956863Z" level=info msg="StartContainer for \"5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80\""
Feb 9 10:06:54.910514 systemd[1]: Started cri-containerd-5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80.scope.
Feb 9 10:06:54.942430 env[1144]: time="2024-02-09T10:06:54.942383387Z" level=info msg="StartContainer for \"5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80\" returns successfully"
Feb 9 10:06:54.946510 systemd[1]: cri-containerd-5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80.scope: Deactivated successfully.
Feb 9 10:06:54.969450 env[1144]: time="2024-02-09T10:06:54.969391506Z" level=info msg="shim disconnected" id=5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80
Feb 9 10:06:54.969754 env[1144]: time="2024-02-09T10:06:54.969710430Z" level=warning msg="cleaning up after shim disconnected" id=5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80 namespace=k8s.io
Feb 9 10:06:54.969841 env[1144]: time="2024-02-09T10:06:54.969825471Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:54.977035 env[1144]: time="2024-02-09T10:06:54.976999036Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3966 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:55.312603 kubelet[1970]: E0209 10:06:55.310305 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:55.481348 kubelet[1970]: E0209 10:06:55.480822 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:55.484782 env[1144]: time="2024-02-09T10:06:55.484714228Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:06:55.497904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676008772.mount: Deactivated successfully.
Feb 9 10:06:55.504538 env[1144]: time="2024-02-09T10:06:55.504497933Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d\""
Feb 9 10:06:55.506422 env[1144]: time="2024-02-09T10:06:55.505835828Z" level=info msg="StartContainer for \"3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d\""
Feb 9 10:06:55.534841 systemd[1]: Started cri-containerd-3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d.scope.
Feb 9 10:06:55.563809 env[1144]: time="2024-02-09T10:06:55.562921319Z" level=info msg="StartContainer for \"3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d\" returns successfully"
Feb 9 10:06:55.569332 systemd[1]: cri-containerd-3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d.scope: Deactivated successfully.
Feb 9 10:06:55.588909 env[1144]: time="2024-02-09T10:06:55.588861254Z" level=info msg="shim disconnected" id=3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d
Feb 9 10:06:55.588909 env[1144]: time="2024-02-09T10:06:55.588907735Z" level=warning msg="cleaning up after shim disconnected" id=3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d namespace=k8s.io
Feb 9 10:06:55.589188 env[1144]: time="2024-02-09T10:06:55.588918135Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:55.595830 env[1144]: time="2024-02-09T10:06:55.595794093Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:56.245659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d-rootfs.mount: Deactivated successfully.
Feb 9 10:06:56.309292 kubelet[1970]: I0209 10:06:56.309249 1970 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2fb86247-b672-421f-a56a-44c2dbec702f path="/var/lib/kubelet/pods/2fb86247-b672-421f-a56a-44c2dbec702f/volumes"
Feb 9 10:06:56.363407 kubelet[1970]: E0209 10:06:56.363369 1970 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:06:56.487220 kubelet[1970]: E0209 10:06:56.487185 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:56.489088 env[1144]: time="2024-02-09T10:06:56.489043830Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:06:56.503716 env[1144]: time="2024-02-09T10:06:56.503321307Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154\""
Feb 9 10:06:56.504255 kubelet[1970]: W0209 10:06:56.504216 1970 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb86247_b672_421f_a56a_44c2dbec702f.slice/cri-containerd-43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56.scope WatchSource:0}: container "43aa23659fac80bf8c47eb0f4db8c4d69c0d19ba9967ae09b189c9ded4a8ca56" in namespace "k8s.io": not found
Feb 9 10:06:56.504334 env[1144]: time="2024-02-09T10:06:56.504237997Z" level=info msg="StartContainer for \"3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154\""
Feb 9 10:06:56.527317 systemd[1]: Started cri-containerd-3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154.scope.
Feb 9 10:06:56.558674 systemd[1]: cri-containerd-3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154.scope: Deactivated successfully.
Feb 9 10:06:56.558916 env[1144]: time="2024-02-09T10:06:56.558652155Z" level=info msg="StartContainer for \"3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154\" returns successfully"
Feb 9 10:06:56.583307 env[1144]: time="2024-02-09T10:06:56.583249145Z" level=info msg="shim disconnected" id=3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154
Feb 9 10:06:56.583307 env[1144]: time="2024-02-09T10:06:56.583292146Z" level=warning msg="cleaning up after shim disconnected" id=3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154 namespace=k8s.io
Feb 9 10:06:56.583307 env[1144]: time="2024-02-09T10:06:56.583302866Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:56.592162 env[1144]: time="2024-02-09T10:06:56.592114603Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4085 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:57.245779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154-rootfs.mount: Deactivated successfully.
Feb 9 10:06:57.307132 kubelet[1970]: E0209 10:06:57.307076 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:57.490352 kubelet[1970]: E0209 10:06:57.490302 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:57.493068 env[1144]: time="2024-02-09T10:06:57.493001789Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:06:57.506067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548513583.mount: Deactivated successfully.
Feb 9 10:06:57.508898 env[1144]: time="2024-02-09T10:06:57.508846157Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a\""
Feb 9 10:06:57.509489 env[1144]: time="2024-02-09T10:06:57.509458444Z" level=info msg="StartContainer for \"8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a\""
Feb 9 10:06:57.529024 systemd[1]: Started cri-containerd-8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a.scope.
Feb 9 10:06:57.556410 systemd[1]: cri-containerd-8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a.scope: Deactivated successfully.
Feb 9 10:06:57.558514 env[1144]: time="2024-02-09T10:06:57.558464963Z" level=info msg="StartContainer for \"8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a\" returns successfully"
Feb 9 10:06:57.579565 env[1144]: time="2024-02-09T10:06:57.579517826Z" level=info msg="shim disconnected" id=8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a
Feb 9 10:06:57.579565 env[1144]: time="2024-02-09T10:06:57.579558866Z" level=warning msg="cleaning up after shim disconnected" id=8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a namespace=k8s.io
Feb 9 10:06:57.579565 env[1144]: time="2024-02-09T10:06:57.579569706Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:57.586327 env[1144]: time="2024-02-09T10:06:57.586294418Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4140 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:58.308093 kubelet[1970]: I0209 10:06:58.308063 1970 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:06:58.30800483 +0000 UTC m=+82.101950156 LastTransitionTime:2024-02-09 10:06:58.30800483 +0000 UTC m=+82.101950156 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 10:06:58.494961 kubelet[1970]: E0209 10:06:58.494932 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:58.497449 env[1144]: time="2024-02-09T10:06:58.497406726Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:06:58.512355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266587588.mount: Deactivated successfully.
Feb 9 10:06:58.516287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852199594.mount: Deactivated successfully.
Feb 9 10:06:58.525463 env[1144]: time="2024-02-09T10:06:58.525410652Z" level=info msg="CreateContainer within sandbox \"fd1c42434d5800009fe18794eb8c7117f10d5b77002b27cfeca91eb793c87490\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81\""
Feb 9 10:06:58.526125 env[1144]: time="2024-02-09T10:06:58.526080659Z" level=info msg="StartContainer for \"9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81\""
Feb 9 10:06:58.539204 systemd[1]: Started cri-containerd-9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81.scope.
Feb 9 10:06:58.571465 env[1144]: time="2024-02-09T10:06:58.571379282Z" level=info msg="StartContainer for \"9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81\" returns successfully"
Feb 9 10:06:58.794746 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 10:06:59.499870 kubelet[1970]: E0209 10:06:59.499826 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:59.513176 kubelet[1970]: I0209 10:06:59.513128 1970 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bxsrz" podStartSLOduration=5.513096439 podCreationTimestamp="2024-02-09 10:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:06:59.512244271 +0000 UTC m=+83.306189637" watchObservedRunningTime="2024-02-09 10:06:59.513096439 +0000 UTC m=+83.307041805"
Feb 9 10:06:59.617391 kubelet[1970]: W0209 10:06:59.617345 1970 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699820a_4f32_4b88_a7df_9e04aff5b1da.slice/cri-containerd-5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80.scope WatchSource:0}: task 5de57965767ff50817efbd560db06383c81cd7b33e8eb847363f965a11de2c80 not found: not found
Feb 9 10:07:00.841534 kubelet[1970]: E0209 10:07:00.841503 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:01.441634 systemd-networkd[1044]: lxc_health: Link UP
Feb 9 10:07:01.445470 systemd[1]: run-containerd-runc-k8s.io-9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81-runc.1QNK67.mount: Deactivated successfully.
Feb 9 10:07:01.448735 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:07:01.449230 systemd-networkd[1044]: lxc_health: Gained carrier
Feb 9 10:07:02.307132 kubelet[1970]: E0209 10:07:02.307097 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:02.725223 kubelet[1970]: W0209 10:07:02.725115 1970 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699820a_4f32_4b88_a7df_9e04aff5b1da.slice/cri-containerd-3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d.scope WatchSource:0}: task 3ef8a3b4b79a78e21b55d2f5596570e196563a8459027b00cdc4ffb8a2db018d not found: not found
Feb 9 10:07:02.812813 systemd-networkd[1044]: lxc_health: Gained IPv6LL
Feb 9 10:07:02.841639 kubelet[1970]: E0209 10:07:02.841597 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:03.505927 kubelet[1970]: E0209 10:07:03.505892 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:03.611431 systemd[1]: run-containerd-runc-k8s.io-9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81-runc.qC3wsr.mount: Deactivated successfully.
Feb 9 10:07:04.508025 kubelet[1970]: E0209 10:07:04.507995 1970 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:05.742056 systemd[1]: run-containerd-runc-k8s.io-9bed0a0f4b43728c88c632c9c79cd35ce8c980119bbb9dc95f42e0966c417b81-runc.cgU8Fw.mount: Deactivated successfully.
Feb 9 10:07:05.833228 kubelet[1970]: W0209 10:07:05.833183 1970 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699820a_4f32_4b88_a7df_9e04aff5b1da.slice/cri-containerd-3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154.scope WatchSource:0}: task 3cee60a0af6522c0156aa00e5bef95fca1b748dfbcc114239f0fdaccaa839154 not found: not found
Feb 9 10:07:07.995595 sshd[3743]: pam_unix(sshd:session): session closed for user core
Feb 9 10:07:07.998205 systemd[1]: sshd@23-10.0.0.120:22-10.0.0.1:56686.service: Deactivated successfully.
Feb 9 10:07:07.998936 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 10:07:07.999491 systemd-logind[1133]: Session 24 logged out. Waiting for processes to exit.
Feb 9 10:07:08.000217 systemd-logind[1133]: Removed session 24.
Feb 9 10:07:08.939257 kubelet[1970]: W0209 10:07:08.939213 1970 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699820a_4f32_4b88_a7df_9e04aff5b1da.slice/cri-containerd-8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a.scope WatchSource:0}: task 8e666620a815bb727769bb07485a7e45dcb7e42cbd33eb753e51cb4ac49e0e9a not found: not found