Feb 9 10:09:06.721198 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 10:09:06.721230 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 10:09:06.721238 kernel: efi: EFI v2.70 by EDK II
Feb 9 10:09:06.721244 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 10:09:06.721249 kernel: random: crng init done
Feb 9 10:09:06.721254 kernel: ACPI: Early table checksum verification disabled
Feb 9 10:09:06.721261 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 10:09:06.721275 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 10:09:06.721281 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721286 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721292 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721297 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721303 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721308 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721316 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721322 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721328 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 10:09:06.721334 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 10:09:06.721339 kernel: NUMA: Failed to initialise from firmware
Feb 9 10:09:06.721345 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:09:06.721351 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 10:09:06.721357 kernel: Zone ranges:
Feb 9 10:09:06.721363 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:09:06.721369 kernel: DMA32 empty
Feb 9 10:09:06.721375 kernel: Normal empty
Feb 9 10:09:06.721381 kernel: Movable zone start for each node
Feb 9 10:09:06.721386 kernel: Early memory node ranges
Feb 9 10:09:06.721392 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 10:09:06.721398 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 10:09:06.721403 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 10:09:06.721409 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 10:09:06.721415 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 10:09:06.721420 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 10:09:06.721426 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 10:09:06.721432 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 10:09:06.721439 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 10:09:06.721444 kernel: psci: probing for conduit method from ACPI.
Feb 9 10:09:06.721450 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 10:09:06.721456 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 10:09:06.721462 kernel: psci: Trusted OS migration not required
Feb 9 10:09:06.721470 kernel: psci: SMC Calling Convention v1.1
Feb 9 10:09:06.721476 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 10:09:06.721484 kernel: ACPI: SRAT not present
Feb 9 10:09:06.721490 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 10:09:06.721496 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 10:09:06.721503 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 10:09:06.721509 kernel: Detected PIPT I-cache on CPU0
Feb 9 10:09:06.721515 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 10:09:06.721521 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 10:09:06.721527 kernel: CPU features: detected: Spectre-v4
Feb 9 10:09:06.721533 kernel: CPU features: detected: Spectre-BHB
Feb 9 10:09:06.721540 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 10:09:06.721546 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 10:09:06.721552 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 10:09:06.721558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 10:09:06.721565 kernel: Policy zone: DMA
Feb 9 10:09:06.721572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:09:06.721579 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 10:09:06.721585 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 10:09:06.721591 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 10:09:06.721597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 10:09:06.721604 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 10:09:06.721611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 10:09:06.721617 kernel: trace event string verifier disabled
Feb 9 10:09:06.721623 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 10:09:06.721630 kernel: rcu: RCU event tracing is enabled.
Feb 9 10:09:06.721636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 10:09:06.721643 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 10:09:06.721649 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 10:09:06.721655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 10:09:06.721661 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 10:09:06.721667 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 10:09:06.721673 kernel: GICv3: 256 SPIs implemented
Feb 9 10:09:06.721680 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 10:09:06.721687 kernel: GICv3: Distributor has no Range Selector support
Feb 9 10:09:06.721693 kernel: Root IRQ handler: gic_handle_irq
Feb 9 10:09:06.721699 kernel: GICv3: 16 PPIs implemented
Feb 9 10:09:06.721705 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 10:09:06.721711 kernel: ACPI: SRAT not present
Feb 9 10:09:06.721716 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 10:09:06.721723 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 10:09:06.721729 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 10:09:06.721735 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 10:09:06.721741 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 10:09:06.721748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:09:06.721755 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 10:09:06.721762 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 10:09:06.721768 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 10:09:06.721774 kernel: arm-pv: using stolen time PV
Feb 9 10:09:06.721781 kernel: Console: colour dummy device 80x25
Feb 9 10:09:06.721787 kernel: ACPI: Core revision 20210730
Feb 9 10:09:06.721794 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 10:09:06.721800 kernel: pid_max: default: 32768 minimum: 301
Feb 9 10:09:06.721806 kernel: LSM: Security Framework initializing
Feb 9 10:09:06.721826 kernel: SELinux: Initializing.
Feb 9 10:09:06.721834 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:09:06.721840 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 10:09:06.721847 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 10:09:06.721853 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 10:09:06.721860 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 10:09:06.721866 kernel: Remapping and enabling EFI services.
Feb 9 10:09:06.721873 kernel: smp: Bringing up secondary CPUs ...
Feb 9 10:09:06.721879 kernel: Detected PIPT I-cache on CPU1
Feb 9 10:09:06.721886 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 10:09:06.721893 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 10:09:06.721900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:09:06.721906 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 10:09:06.721913 kernel: Detected PIPT I-cache on CPU2
Feb 9 10:09:06.721919 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 10:09:06.721926 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 10:09:06.721933 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:09:06.721939 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 10:09:06.721945 kernel: Detected PIPT I-cache on CPU3
Feb 9 10:09:06.721952 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 10:09:06.721960 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 10:09:06.721966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 10:09:06.721972 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 10:09:06.721979 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 10:09:06.721989 kernel: SMP: Total of 4 processors activated.
Feb 9 10:09:06.721997 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 10:09:06.722004 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 10:09:06.722011 kernel: CPU features: detected: Common not Private translations
Feb 9 10:09:06.722017 kernel: CPU features: detected: CRC32 instructions
Feb 9 10:09:06.722025 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 10:09:06.722032 kernel: CPU features: detected: LSE atomic instructions
Feb 9 10:09:06.722038 kernel: CPU features: detected: Privileged Access Never
Feb 9 10:09:06.722046 kernel: CPU features: detected: RAS Extension Support
Feb 9 10:09:06.722053 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 10:09:06.722060 kernel: CPU: All CPU(s) started at EL1
Feb 9 10:09:06.722067 kernel: alternatives: patching kernel code
Feb 9 10:09:06.722075 kernel: devtmpfs: initialized
Feb 9 10:09:06.722081 kernel: KASLR enabled
Feb 9 10:09:06.722088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 10:09:06.722095 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 10:09:06.722102 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 10:09:06.722108 kernel: SMBIOS 3.0.0 present.
Feb 9 10:09:06.722115 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 10:09:06.722122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 10:09:06.722129 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 10:09:06.722136 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 10:09:06.722144 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 10:09:06.722150 kernel: audit: initializing netlink subsys (disabled)
Feb 9 10:09:06.722157 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 10:09:06.722164 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 10:09:06.722171 kernel: cpuidle: using governor menu
Feb 9 10:09:06.722178 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 10:09:06.722185 kernel: ASID allocator initialised with 32768 entries
Feb 9 10:09:06.722203 kernel: ACPI: bus type PCI registered
Feb 9 10:09:06.722210 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 10:09:06.722218 kernel: Serial: AMBA PL011 UART driver
Feb 9 10:09:06.722225 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 10:09:06.722232 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 10:09:06.722239 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 10:09:06.722246 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 10:09:06.722252 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 10:09:06.722259 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 10:09:06.722271 kernel: ACPI: Added _OSI(Module Device)
Feb 9 10:09:06.722279 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 10:09:06.722287 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 10:09:06.722294 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 10:09:06.722300 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 10:09:06.722307 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 10:09:06.722313 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 10:09:06.722320 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 10:09:06.722327 kernel: ACPI: Interpreter enabled
Feb 9 10:09:06.722334 kernel: ACPI: Using GIC for interrupt routing
Feb 9 10:09:06.722341 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 10:09:06.722349 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 10:09:06.722356 kernel: printk: console [ttyAMA0] enabled
Feb 9 10:09:06.722363 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 10:09:06.722490 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 10:09:06.722558 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 10:09:06.722621 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 10:09:06.722684 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 10:09:06.722749 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 10:09:06.722758 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 10:09:06.722765 kernel: PCI host bridge to bus 0000:00
Feb 9 10:09:06.722837 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 10:09:06.722895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 10:09:06.722956 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 10:09:06.723016 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 10:09:06.723092 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 10:09:06.723171 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 10:09:06.723264 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 10:09:06.723340 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 10:09:06.723405 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:09:06.723468 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 10:09:06.723531 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 10:09:06.723599 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 10:09:06.723662 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 10:09:06.723721 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 10:09:06.723779 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 10:09:06.723788 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 10:09:06.723795 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 10:09:06.723802 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 10:09:06.723811 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 10:09:06.723821 kernel: iommu: Default domain type: Translated
Feb 9 10:09:06.723827 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 10:09:06.723834 kernel: vgaarb: loaded
Feb 9 10:09:06.723841 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 10:09:06.723849 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 10:09:06.723856 kernel: PTP clock support registered
Feb 9 10:09:06.723863 kernel: Registered efivars operations
Feb 9 10:09:06.723869 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 10:09:06.723876 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 10:09:06.723886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 10:09:06.723893 kernel: pnp: PnP ACPI init
Feb 9 10:09:06.723977 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 10:09:06.723987 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 10:09:06.723994 kernel: NET: Registered PF_INET protocol family
Feb 9 10:09:06.724001 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 10:09:06.724008 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 10:09:06.724017 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 10:09:06.724025 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 10:09:06.724032 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 10:09:06.724039 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 10:09:06.724046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:09:06.724054 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 10:09:06.724061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 10:09:06.724067 kernel: PCI: CLS 0 bytes, default 64
Feb 9 10:09:06.724074 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 10:09:06.724081 kernel: kvm [1]: HYP mode not available
Feb 9 10:09:06.724091 kernel: Initialise system trusted keyrings
Feb 9 10:09:06.724097 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 10:09:06.724104 kernel: Key type asymmetric registered
Feb 9 10:09:06.724110 kernel: Asymmetric key parser 'x509' registered
Feb 9 10:09:06.724119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 10:09:06.724126 kernel: io scheduler mq-deadline registered
Feb 9 10:09:06.724132 kernel: io scheduler kyber registered
Feb 9 10:09:06.724141 kernel: io scheduler bfq registered
Feb 9 10:09:06.724148 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 10:09:06.724156 kernel: ACPI: button: Power Button [PWRB]
Feb 9 10:09:06.724164 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 10:09:06.724261 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 10:09:06.724278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 10:09:06.724285 kernel: thunder_xcv, ver 1.0
Feb 9 10:09:06.724291 kernel: thunder_bgx, ver 1.0
Feb 9 10:09:06.724298 kernel: nicpf, ver 1.0
Feb 9 10:09:06.724304 kernel: nicvf, ver 1.0
Feb 9 10:09:06.724373 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 10:09:06.724467 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T10:09:06 UTC (1707473346)
Feb 9 10:09:06.724478 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 10:09:06.724485 kernel: NET: Registered PF_INET6 protocol family
Feb 9 10:09:06.724492 kernel: Segment Routing with IPv6
Feb 9 10:09:06.724498 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 10:09:06.724505 kernel: NET: Registered PF_PACKET protocol family
Feb 9 10:09:06.724512 kernel: Key type dns_resolver registered
Feb 9 10:09:06.724518 kernel: registered taskstats version 1
Feb 9 10:09:06.724528 kernel: Loading compiled-in X.509 certificates
Feb 9 10:09:06.724534 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 10:09:06.724541 kernel: Key type .fscrypt registered
Feb 9 10:09:06.724547 kernel: Key type fscrypt-provisioning registered
Feb 9 10:09:06.724554 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 10:09:06.724561 kernel: ima: Allocated hash algorithm: sha1
Feb 9 10:09:06.724567 kernel: ima: No architecture policies found
Feb 9 10:09:06.724583 kernel: Freeing unused kernel memory: 34688K
Feb 9 10:09:06.724590 kernel: Run /init as init process
Feb 9 10:09:06.724598 kernel: with arguments:
Feb 9 10:09:06.724605 kernel: /init
Feb 9 10:09:06.724611 kernel: with environment:
Feb 9 10:09:06.724617 kernel: HOME=/
Feb 9 10:09:06.724624 kernel: TERM=linux
Feb 9 10:09:06.724630 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 10:09:06.724639 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:09:06.724647 systemd[1]: Detected virtualization kvm.
Feb 9 10:09:06.724656 systemd[1]: Detected architecture arm64.
Feb 9 10:09:06.724663 systemd[1]: Running in initrd.
Feb 9 10:09:06.724670 systemd[1]: No hostname configured, using default hostname.
Feb 9 10:09:06.724677 systemd[1]: Hostname set to .
Feb 9 10:09:06.724685 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 10:09:06.724692 systemd[1]: Queued start job for default target initrd.target.
Feb 9 10:09:06.724699 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:09:06.724706 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:09:06.724714 systemd[1]: Reached target paths.target.
Feb 9 10:09:06.724721 systemd[1]: Reached target slices.target.
Feb 9 10:09:06.724728 systemd[1]: Reached target swap.target.
Feb 9 10:09:06.724736 systemd[1]: Reached target timers.target.
Feb 9 10:09:06.724743 systemd[1]: Listening on iscsid.socket.
Feb 9 10:09:06.724750 systemd[1]: Listening on iscsiuio.socket.
Feb 9 10:09:06.724758 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 10:09:06.724766 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 10:09:06.724773 systemd[1]: Listening on systemd-journald.socket.
Feb 9 10:09:06.724781 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:09:06.724788 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:09:06.724795 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:09:06.724802 systemd[1]: Reached target sockets.target.
Feb 9 10:09:06.724815 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:09:06.724828 systemd[1]: Finished network-cleanup.service.
Feb 9 10:09:06.724835 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 10:09:06.724844 systemd[1]: Starting systemd-journald.service...
Feb 9 10:09:06.724851 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:09:06.724858 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:09:06.724865 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 10:09:06.724872 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:09:06.724879 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 10:09:06.724887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:09:06.724894 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 10:09:06.724902 kernel: audit: type=1130 audit(1707473346.720:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.724910 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 10:09:06.724917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:09:06.724927 systemd-journald[289]: Journal started
Feb 9 10:09:06.724967 systemd-journald[289]: Runtime Journal (/run/log/journal/4715bf354a4d459d822134ea0f046e8f) is 6.0M, max 48.7M, 42.6M free.
Feb 9 10:09:06.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.717375 systemd-modules-load[290]: Inserted module 'overlay'
Feb 9 10:09:06.728813 kernel: audit: type=1130 audit(1707473346.725:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.728833 systemd[1]: Started systemd-journald.service.
Feb 9 10:09:06.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.733886 kernel: audit: type=1130 audit(1707473346.729:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.738222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 10:09:06.739671 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 9 10:09:06.740322 kernel: Bridge firewalling registered
Feb 9 10:09:06.741058 systemd-resolved[291]: Positive Trust Anchors:
Feb 9 10:09:06.741071 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:09:06.741097 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:09:06.747481 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 9 10:09:06.750406 kernel: SCSI subsystem initialized
Feb 9 10:09:06.750025 systemd[1]: Started systemd-resolved.service.
Feb 9 10:09:06.753198 kernel: audit: type=1130 audit(1707473346.750:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.750839 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:09:06.756931 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 10:09:06.761220 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 10:09:06.761240 kernel: device-mapper: uevent: version 1.0.3
Feb 9 10:09:06.761249 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 10:09:06.761257 kernel: audit: type=1130 audit(1707473346.758:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.759407 systemd[1]: Starting dracut-cmdline.service...
Feb 9 10:09:06.762480 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 9 10:09:06.763162 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:09:06.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.766327 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:09:06.768550 kernel: audit: type=1130 audit(1707473346.764:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.769450 dracut-cmdline[307]: dracut-dracut-053
Feb 9 10:09:06.771664 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 10:09:06.772925 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:09:06.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.778227 kernel: audit: type=1130 audit(1707473346.775:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.831210 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 10:09:06.839216 kernel: iscsi: registered transport (tcp)
Feb 9 10:09:06.854494 kernel: iscsi: registered transport (qla4xxx)
Feb 9 10:09:06.854519 kernel: QLogic iSCSI HBA Driver
Feb 9 10:09:06.888156 systemd[1]: Finished dracut-cmdline.service.
Feb 9 10:09:06.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.889593 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 10:09:06.891848 kernel: audit: type=1130 audit(1707473346.887:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:06.933209 kernel: raid6: neonx8 gen() 13741 MB/s
Feb 9 10:09:06.950212 kernel: raid6: neonx8 xor() 10794 MB/s
Feb 9 10:09:06.967215 kernel: raid6: neonx4 gen() 13491 MB/s
Feb 9 10:09:06.984217 kernel: raid6: neonx4 xor() 11267 MB/s
Feb 9 10:09:07.001224 kernel: raid6: neonx2 gen() 12974 MB/s
Feb 9 10:09:07.018213 kernel: raid6: neonx2 xor() 10323 MB/s
Feb 9 10:09:07.035211 kernel: raid6: neonx1 gen() 10494 MB/s
Feb 9 10:09:07.052217 kernel: raid6: neonx1 xor() 8765 MB/s
Feb 9 10:09:07.069225 kernel: raid6: int64x8 gen() 5538 MB/s
Feb 9 10:09:07.086227 kernel: raid6: int64x8 xor() 3547 MB/s
Feb 9 10:09:07.103243 kernel: raid6: int64x4 gen() 7212 MB/s
Feb 9 10:09:07.120226 kernel: raid6: int64x4 xor() 3848 MB/s
Feb 9 10:09:07.137226 kernel: raid6: int64x2 gen() 6150 MB/s
Feb 9 10:09:07.154224 kernel: raid6: int64x2 xor() 3321 MB/s
Feb 9 10:09:07.171225 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 9 10:09:07.188422 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 10:09:07.188469 kernel: raid6: using algorithm neonx8 gen() 13741 MB/s
Feb 9 10:09:07.188500 kernel: raid6: .... xor() 10794 MB/s, rmw enabled
Feb 9 10:09:07.188519 kernel: raid6: using neon recovery algorithm
Feb 9 10:09:07.199292 kernel: xor: measuring software checksum speed
Feb 9 10:09:07.199331 kernel: 8regs : 17279 MB/sec
Feb 9 10:09:07.200217 kernel: 32regs : 20760 MB/sec
Feb 9 10:09:07.201255 kernel: arm64_neon : 27854 MB/sec
Feb 9 10:09:07.201282 kernel: xor: using function: arm64_neon (27854 MB/sec)
Feb 9 10:09:07.254231 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 10:09:07.264523 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 10:09:07.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:07.266241 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:09:07.268896 kernel: audit: type=1130 audit(1707473347.265:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:07.265000 audit: BPF prog-id=7 op=LOAD
Feb 9 10:09:07.265000 audit: BPF prog-id=8 op=LOAD
Feb 9 10:09:07.281987 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb 9 10:09:07.285293 systemd[1]: Started systemd-udevd.service.
Feb 9 10:09:07.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:07.287113 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 10:09:07.298580 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Feb 9 10:09:07.325902 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 10:09:07.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:07.327430 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:09:07.366133 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:09:07.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:07.394883 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 10:09:07.398531 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 10:09:07.398562 kernel: GPT:9289727 != 19775487
Feb 9 10:09:07.398571 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 10:09:07.398580 kernel: GPT:9289727 != 19775487
Feb 9 10:09:07.398588 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 10:09:07.398596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:09:07.409212 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (557)
Feb 9 10:09:07.411504 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 10:09:07.412462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 10:09:07.418307 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 10:09:07.421733 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 10:09:07.425161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:09:07.428884 systemd[1]: Starting disk-uuid.service...
Feb 9 10:09:07.434636 disk-uuid[565]: Primary Header is updated.
Feb 9 10:09:07.434636 disk-uuid[565]: Secondary Entries is updated.
Feb 9 10:09:07.434636 disk-uuid[565]: Secondary Header is updated.
Feb 9 10:09:07.439212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:09:08.450205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:09:08.450253 disk-uuid[566]: The operation has completed successfully.
Feb 9 10:09:08.475620 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 10:09:08.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.475714 systemd[1]: Finished disk-uuid.service.
Feb 9 10:09:08.477247 systemd[1]: Starting verity-setup.service...
Feb 9 10:09:08.494229 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 10:09:08.517558 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 10:09:08.519760 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 10:09:08.521579 systemd[1]: Finished verity-setup.service.
Feb 9 10:09:08.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.569014 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 10:09:08.570252 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 10:09:08.569813 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 10:09:08.570554 systemd[1]: Starting ignition-setup.service...
Feb 9 10:09:08.572462 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 10:09:08.579393 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:09:08.579427 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:09:08.579437 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:09:08.586598 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 10:09:08.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.592109 systemd[1]: Finished ignition-setup.service.
Feb 9 10:09:08.593586 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 10:09:08.660124 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 10:09:08.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.661000 audit: BPF prog-id=9 op=LOAD
Feb 9 10:09:08.662117 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:09:08.674526 ignition[652]: Ignition 2.14.0
Feb 9 10:09:08.675296 ignition[652]: Stage: fetch-offline
Feb 9 10:09:08.675985 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:09:08.676803 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:09:08.677822 ignition[652]: parsed url from cmdline: ""
Feb 9 10:09:08.677891 ignition[652]: no config URL provided
Feb 9 10:09:08.678540 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:09:08.679559 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:09:08.680392 ignition[652]: op(1): [started] loading QEMU firmware config module
Feb 9 10:09:08.681322 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 10:09:08.684553 systemd-networkd[742]: lo: Link UP
Feb 9 10:09:08.684564 systemd-networkd[742]: lo: Gained carrier
Feb 9 10:09:08.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.685130 systemd-networkd[742]: Enumeration completed
Feb 9 10:09:08.685240 systemd[1]: Started systemd-networkd.service.
Feb 9 10:09:08.687242 ignition[652]: op(1): [finished] loading QEMU firmware config module
Feb 9 10:09:08.685500 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:09:08.686045 systemd[1]: Reached target network.target.
Feb 9 10:09:08.686889 systemd-networkd[742]: eth0: Link UP
Feb 9 10:09:08.686893 systemd-networkd[742]: eth0: Gained carrier
Feb 9 10:09:08.687884 systemd[1]: Starting iscsiuio.service...
Feb 9 10:09:08.696989 systemd[1]: Started iscsiuio.service.
Feb 9 10:09:08.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.698482 systemd[1]: Starting iscsid.service...
Feb 9 10:09:08.700255 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:09:08.701764 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:09:08.701764 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 10:09:08.701764 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 10:09:08.701764 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 10:09:08.701764 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:09:08.701764 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 10:09:08.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.704702 systemd[1]: Started iscsid.service.
Feb 9 10:09:08.708922 systemd[1]: Starting dracut-initqueue.service...
Feb 9 10:09:08.718995 systemd[1]: Finished dracut-initqueue.service.
Feb 9 10:09:08.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.719980 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 10:09:08.721198 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:09:08.722475 systemd[1]: Reached target remote-fs.target.
Feb 9 10:09:08.724597 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 10:09:08.729497 ignition[652]: parsing config with SHA512: f58257274a78c70c1f5e96ca1095b886aa21f49f9c4b4cefa639375844af31752678edc00e98bda0327e2fbb558c7bc5b600faea517bfcd2e7d9f589a80e7378
Feb 9 10:09:08.733251 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 10:09:08.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.753688 unknown[652]: fetched base config from "system"
Feb 9 10:09:08.753698 unknown[652]: fetched user config from "qemu"
Feb 9 10:09:08.754886 ignition[652]: fetch-offline: fetch-offline passed
Feb 9 10:09:08.754957 ignition[652]: Ignition finished successfully
Feb 9 10:09:08.756026 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 10:09:08.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.756788 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 10:09:08.757539 systemd[1]: Starting ignition-kargs.service...
Feb 9 10:09:08.766203 ignition[764]: Ignition 2.14.0
Feb 9 10:09:08.766213 ignition[764]: Stage: kargs
Feb 9 10:09:08.766315 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:09:08.768144 systemd[1]: Finished ignition-kargs.service.
Feb 9 10:09:08.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.766324 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:09:08.767101 ignition[764]: kargs: kargs passed
Feb 9 10:09:08.770072 systemd[1]: Starting ignition-disks.service...
Feb 9 10:09:08.767140 ignition[764]: Ignition finished successfully
Feb 9 10:09:08.776946 ignition[770]: Ignition 2.14.0
Feb 9 10:09:08.776955 ignition[770]: Stage: disks
Feb 9 10:09:08.777048 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:09:08.777057 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:09:08.777947 ignition[770]: disks: disks passed
Feb 9 10:09:08.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.779336 systemd[1]: Finished ignition-disks.service.
Feb 9 10:09:08.777992 ignition[770]: Ignition finished successfully
Feb 9 10:09:08.780242 systemd[1]: Reached target initrd-root-device.target.
Feb 9 10:09:08.781083 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:09:08.782037 systemd[1]: Reached target local-fs.target.
Feb 9 10:09:08.782978 systemd[1]: Reached target sysinit.target.
Feb 9 10:09:08.783979 systemd[1]: Reached target basic.target.
Feb 9 10:09:08.785750 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 10:09:08.796160 systemd-fsck[778]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 10:09:08.800645 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 10:09:08.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.802178 systemd[1]: Mounting sysroot.mount...
Feb 9 10:09:08.814212 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 10:09:08.814795 systemd[1]: Mounted sysroot.mount.
Feb 9 10:09:08.815538 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 10:09:08.817499 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 10:09:08.818220 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 10:09:08.818268 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 10:09:08.818294 systemd[1]: Reached target ignition-diskful.target.
Feb 9 10:09:08.820534 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 10:09:08.822696 systemd[1]: Starting initrd-setup-root.service...
Feb 9 10:09:08.827163 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 10:09:08.831764 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory
Feb 9 10:09:08.835458 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 10:09:08.839353 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 10:09:08.872269 systemd[1]: Finished initrd-setup-root.service.
Feb 9 10:09:08.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.873810 systemd[1]: Starting ignition-mount.service...
Feb 9 10:09:08.875062 systemd[1]: Starting sysroot-boot.service...
Feb 9 10:09:08.879676 bash[829]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 10:09:08.888029 ignition[831]: INFO : Ignition 2.14.0
Feb 9 10:09:08.888029 ignition[831]: INFO : Stage: mount
Feb 9 10:09:08.889388 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:09:08.889388 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:09:08.889388 ignition[831]: INFO : mount: mount passed
Feb 9 10:09:08.889388 ignition[831]: INFO : Ignition finished successfully
Feb 9 10:09:08.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:08.891173 systemd[1]: Finished ignition-mount.service.
Feb 9 10:09:08.895365 systemd[1]: Finished sysroot-boot.service.
Feb 9 10:09:08.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:09.531293 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:09:09.537852 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (840)
Feb 9 10:09:09.537884 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:09:09.537894 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:09:09.538324 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:09:09.541534 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:09:09.543103 systemd[1]: Starting ignition-files.service...
Feb 9 10:09:09.556628 ignition[860]: INFO : Ignition 2.14.0
Feb 9 10:09:09.556628 ignition[860]: INFO : Stage: files
Feb 9 10:09:09.558177 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:09:09.558177 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:09:09.558177 ignition[860]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 10:09:09.561868 ignition[860]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 10:09:09.561868 ignition[860]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 10:09:09.565540 ignition[860]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 10:09:09.566579 ignition[860]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 10:09:09.566579 ignition[860]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 10:09:09.566306 unknown[860]: wrote ssh authorized keys file for user: core
Feb 9 10:09:09.570221 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:09:09.570221 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 10:09:09.846840 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 10:09:10.089065 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 10:09:10.091789 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:09:10.091789 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:09:10.091789 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 10:09:10.264441 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 10:09:10.392337 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 10:09:10.392337 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:09:10.395938 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:09:10.395938 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1
Feb 9 10:09:10.398650 systemd-networkd[742]: eth0: Gained IPv6LL
Feb 9 10:09:10.446423 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 10:09:10.725777 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3
Feb 9 10:09:10.728048 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:09:10.728048 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:09:10.728048 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 9 10:09:10.749195 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 10:09:11.364089 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 9 10:09:11.366414 ignition[860]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:09:11.389757 ignition[860]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:09:11.413052 ignition[860]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:09:11.414157 ignition[860]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:09:11.414157 ignition[860]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:09:11.414157 ignition[860]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:09:11.414157 ignition[860]: INFO : files: files passed
Feb 9 10:09:11.414157 ignition[860]: INFO : Ignition finished successfully
Feb 9 10:09:11.422815 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 10:09:11.422836 kernel: audit: type=1130 audit(1707473351.415:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.414458 systemd[1]: Finished ignition-files.service.
Feb 9 10:09:11.416882 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 10:09:11.417774 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 10:09:11.418384 systemd[1]: Starting ignition-quench.service...
Feb 9 10:09:11.430435 kernel: audit: type=1130 audit(1707473351.426:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.430452 kernel: audit: type=1131 audit(1707473351.426:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.430514 initrd-setup-root-after-ignition[886]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 10:09:11.433898 kernel: audit: type=1130 audit(1707473351.431:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.424377 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 10:09:11.435236 initrd-setup-root-after-ignition[888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 10:09:11.424451 systemd[1]: Finished ignition-quench.service.
Feb 9 10:09:11.430402 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 10:09:11.431344 systemd[1]: Reached target ignition-complete.target.
Feb 9 10:09:11.435224 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 10:09:11.450719 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 10:09:11.450804 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 10:09:11.455980 kernel: audit: type=1130 audit(1707473351.451:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.455997 kernel: audit: type=1131 audit(1707473351.451:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.452215 systemd[1]: Reached target initrd-fs.target.
Feb 9 10:09:11.456655 systemd[1]: Reached target initrd.target.
Feb 9 10:09:11.457776 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 10:09:11.458450 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 10:09:11.469114 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 10:09:11.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.470557 systemd[1]: Starting initrd-cleanup.service...
Feb 9 10:09:11.473179 kernel: audit: type=1130 audit(1707473351.469:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.477986 systemd[1]: Stopped target nss-lookup.target.
Feb 9 10:09:11.478832 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 10:09:11.480087 systemd[1]: Stopped target timers.target.
Feb 9 10:09:11.481152 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 10:09:11.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.481277 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 10:09:11.485464 kernel: audit: type=1131 audit(1707473351.481:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:11.482312 systemd[1]: Stopped target initrd.target.
Feb 9 10:09:11.485083 systemd[1]: Stopped target basic.target.
Feb 9 10:09:11.486141 systemd[1]: Stopped target ignition-complete.target.
Feb 9 10:09:11.487303 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 10:09:11.488385 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 10:09:11.489561 systemd[1]: Stopped target remote-fs.target.
Feb 9 10:09:11.490683 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 10:09:11.491858 systemd[1]: Stopped target sysinit.target.
Feb 9 10:09:11.492935 systemd[1]: Stopped target local-fs.target.
Feb 9 10:09:11.494003 systemd[1]: Stopped target local-fs-pre.target. Feb 9 10:09:11.494927 systemd[1]: Stopped target swap.target. Feb 9 10:09:11.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.495784 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 10:09:11.500106 kernel: audit: type=1131 audit(1707473351.496:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.495881 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 10:09:11.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.497006 systemd[1]: Stopped target cryptsetup.target. Feb 9 10:09:11.503989 kernel: audit: type=1131 audit(1707473351.500:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.499609 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 10:09:11.499703 systemd[1]: Stopped dracut-initqueue.service. Feb 9 10:09:11.500888 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 10:09:11.500977 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 10:09:11.503702 systemd[1]: Stopped target paths.target. Feb 9 10:09:11.504657 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 9 10:09:11.508258 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 10:09:11.509138 systemd[1]: Stopped target slices.target. Feb 9 10:09:11.510249 systemd[1]: Stopped target sockets.target. Feb 9 10:09:11.511280 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 10:09:11.511350 systemd[1]: Closed iscsid.socket. Feb 9 10:09:11.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.512263 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 10:09:11.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.512358 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 10:09:11.513377 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 10:09:11.513461 systemd[1]: Stopped ignition-files.service. Feb 9 10:09:11.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:09:11.523550 ignition[901]: INFO : Ignition 2.14.0 Feb 9 10:09:11.523550 ignition[901]: INFO : Stage: umount Feb 9 10:09:11.523550 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 10:09:11.523550 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 10:09:11.523550 ignition[901]: INFO : umount: umount passed Feb 9 10:09:11.523550 ignition[901]: INFO : Ignition finished successfully Feb 9 10:09:11.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.515089 systemd[1]: Stopping ignition-mount.service... Feb 9 10:09:11.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.516381 systemd[1]: Stopping iscsiuio.service... Feb 9 10:09:11.518769 systemd[1]: Stopping sysroot-boot.service... Feb 9 10:09:11.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.519289 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 10:09:11.519412 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 10:09:11.520083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 10:09:11.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:09:11.520172 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 10:09:11.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.522098 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 10:09:11.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.522185 systemd[1]: Stopped iscsiuio.service. Feb 9 10:09:11.523205 systemd[1]: Stopped target network.target. Feb 9 10:09:11.524151 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 10:09:11.524271 systemd[1]: Closed iscsiuio.socket. Feb 9 10:09:11.526352 systemd[1]: Stopping systemd-networkd.service... Feb 9 10:09:11.527468 systemd[1]: Stopping systemd-resolved.service... Feb 9 10:09:11.529316 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 10:09:11.529779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 10:09:11.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.529858 systemd[1]: Finished initrd-cleanup.service. Feb 9 10:09:11.531213 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 10:09:11.531304 systemd[1]: Stopped ignition-mount.service. Feb 9 10:09:11.533002 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 10:09:11.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.533051 systemd[1]: Stopped ignition-disks.service. 
Feb 9 10:09:11.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.550000 audit: BPF prog-id=6 op=UNLOAD Feb 9 10:09:11.534228 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 10:09:11.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.534287 systemd[1]: Stopped ignition-kargs.service. Feb 9 10:09:11.536978 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 10:09:11.537023 systemd[1]: Stopped ignition-setup.service. Feb 9 10:09:11.538395 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 10:09:11.538498 systemd[1]: Stopped systemd-resolved.service. Feb 9 10:09:11.542249 systemd-networkd[742]: eth0: DHCPv6 lease lost Feb 9 10:09:11.543588 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 10:09:11.559000 audit: BPF prog-id=9 op=UNLOAD Feb 9 10:09:11.543686 systemd[1]: Stopped systemd-networkd.service. Feb 9 10:09:11.544792 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 10:09:11.544821 systemd[1]: Closed systemd-networkd.socket. Feb 9 10:09:11.547019 systemd[1]: Stopping network-cleanup.service... Feb 9 10:09:11.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.548019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 10:09:11.548076 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 10:09:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 10:09:11.549140 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:09:11.549178 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:09:11.550797 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 10:09:11.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.550836 systemd[1]: Stopped systemd-modules-load.service. Feb 9 10:09:11.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.551928 systemd[1]: Stopping systemd-udevd.service... Feb 9 10:09:11.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.557386 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 10:09:11.560764 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 10:09:11.560866 systemd[1]: Stopped network-cleanup.service. Feb 9 10:09:11.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.562905 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 10:09:11.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.563019 systemd[1]: Stopped systemd-udevd.service. 
Feb 9 10:09:11.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.564610 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 10:09:11.564646 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 10:09:11.565847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 10:09:11.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.565878 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 10:09:11.567022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 10:09:11.567065 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 10:09:11.568260 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 10:09:11.568302 systemd[1]: Stopped dracut-cmdline.service. Feb 9 10:09:11.569626 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 10:09:11.569665 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 10:09:11.571638 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 10:09:11.572900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 10:09:11.572958 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 10:09:11.574583 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 10:09:11.574621 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 10:09:11.575386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 9 10:09:11.575425 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 10:09:11.577424 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 10:09:11.577834 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 10:09:11.577913 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 10:09:11.615477 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 10:09:11.615578 systemd[1]: Stopped sysroot-boot.service. Feb 9 10:09:11.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.616914 systemd[1]: Reached target initrd-switch-root.target. Feb 9 10:09:11.618000 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 10:09:11.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:11.618052 systemd[1]: Stopped initrd-setup-root.service. Feb 9 10:09:11.619912 systemd[1]: Starting initrd-switch-root.service... Feb 9 10:09:11.626402 systemd[1]: Switching root. Feb 9 10:09:11.645450 iscsid[749]: iscsid shutting down. Feb 9 10:09:11.645930 systemd-journald[289]: Journal stopped Feb 9 10:09:13.716728 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 10:09:13.716785 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 10:09:13.716797 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 10:09:13.716807 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 10:09:13.716820 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 10:09:13.716833 kernel: SELinux: policy capability open_perms=1 Feb 9 10:09:13.716844 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 10:09:13.716855 kernel: SELinux: policy capability always_check_network=0 Feb 9 10:09:13.716865 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 10:09:13.716874 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 10:09:13.716884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 10:09:13.716893 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 10:09:13.716903 systemd[1]: Successfully loaded SELinux policy in 36.784ms. Feb 9 10:09:13.716919 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.002ms. Feb 9 10:09:13.716932 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 10:09:13.716947 systemd[1]: Detected virtualization kvm. Feb 9 10:09:13.716958 systemd[1]: Detected architecture arm64. Feb 9 10:09:13.716968 systemd[1]: Detected first boot. Feb 9 10:09:13.716979 systemd[1]: Initializing machine ID from VM UUID. Feb 9 10:09:13.716992 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 10:09:13.717002 systemd[1]: Populated /etc with preset unit settings. Feb 9 10:09:13.717012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 10:09:13.717024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:09:13.717037 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:09:13.717048 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 10:09:13.717058 systemd[1]: Stopped iscsid.service. Feb 9 10:09:13.717068 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 10:09:13.717079 systemd[1]: Stopped initrd-switch-root.service. Feb 9 10:09:13.717089 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 10:09:13.717101 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 10:09:13.717112 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 10:09:13.717123 systemd[1]: Created slice system-getty.slice. Feb 9 10:09:13.717134 systemd[1]: Created slice system-modprobe.slice. Feb 9 10:09:13.717144 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 10:09:13.717154 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 10:09:13.717166 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 10:09:13.717176 systemd[1]: Created slice user.slice. Feb 9 10:09:13.717197 systemd[1]: Started systemd-ask-password-console.path. Feb 9 10:09:13.717208 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 10:09:13.717220 systemd[1]: Set up automount boot.automount. Feb 9 10:09:13.717231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 10:09:13.717241 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 10:09:13.717259 systemd[1]: Stopped target initrd-fs.target. Feb 9 10:09:13.717270 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 10:09:13.717280 systemd[1]: Reached target integritysetup.target. 
Feb 9 10:09:13.717290 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 10:09:13.717302 systemd[1]: Reached target remote-fs.target. Feb 9 10:09:13.717313 systemd[1]: Reached target slices.target. Feb 9 10:09:13.717323 systemd[1]: Reached target swap.target. Feb 9 10:09:13.717333 systemd[1]: Reached target torcx.target. Feb 9 10:09:13.717343 systemd[1]: Reached target veritysetup.target. Feb 9 10:09:13.717353 systemd[1]: Listening on systemd-coredump.socket. Feb 9 10:09:13.717364 systemd[1]: Listening on systemd-initctl.socket. Feb 9 10:09:13.717374 systemd[1]: Listening on systemd-networkd.socket. Feb 9 10:09:13.717384 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 10:09:13.717394 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 10:09:13.717408 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 10:09:13.717419 systemd[1]: Mounting dev-hugepages.mount... Feb 9 10:09:13.717429 systemd[1]: Mounting dev-mqueue.mount... Feb 9 10:09:13.717439 systemd[1]: Mounting media.mount... Feb 9 10:09:13.717450 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 10:09:13.717460 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 10:09:13.717471 systemd[1]: Mounting tmp.mount... Feb 9 10:09:13.717481 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 10:09:13.717491 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 10:09:13.717503 systemd[1]: Starting kmod-static-nodes.service... Feb 9 10:09:13.717513 systemd[1]: Starting modprobe@configfs.service... Feb 9 10:09:13.717523 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 10:09:13.717534 systemd[1]: Starting modprobe@drm.service... Feb 9 10:09:13.717544 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 10:09:13.717554 systemd[1]: Starting modprobe@fuse.service... Feb 9 10:09:13.717565 systemd[1]: Starting modprobe@loop.service... 
Feb 9 10:09:13.717576 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 10:09:13.717586 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 10:09:13.717598 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 10:09:13.717608 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 10:09:13.717618 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 10:09:13.717630 systemd[1]: Stopped systemd-journald.service. Feb 9 10:09:13.717639 kernel: loop: module loaded Feb 9 10:09:13.717649 kernel: fuse: init (API version 7.34) Feb 9 10:09:13.717661 systemd[1]: Starting systemd-journald.service... Feb 9 10:09:13.717673 systemd[1]: Starting systemd-modules-load.service... Feb 9 10:09:13.717683 systemd[1]: Starting systemd-network-generator.service... Feb 9 10:09:13.717694 systemd[1]: Starting systemd-remount-fs.service... Feb 9 10:09:13.717704 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 10:09:13.717715 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 10:09:13.717726 systemd[1]: Stopped verity-setup.service. Feb 9 10:09:13.717736 systemd[1]: Mounted dev-hugepages.mount. Feb 9 10:09:13.717746 systemd[1]: Mounted dev-mqueue.mount. Feb 9 10:09:13.717757 systemd[1]: Mounted media.mount. Feb 9 10:09:13.717769 systemd-journald[995]: Journal started Feb 9 10:09:13.717808 systemd-journald[995]: Runtime Journal (/run/log/journal/4715bf354a4d459d822134ea0f046e8f) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 10:09:11.714000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 10:09:11.866000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 10:09:11.866000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 10:09:11.866000 audit: BPF prog-id=10 op=LOAD Feb 9 10:09:11.866000 audit: BPF prog-id=10 op=UNLOAD Feb 9 10:09:11.866000 audit: BPF prog-id=11 op=LOAD Feb 9 10:09:11.866000 audit: BPF prog-id=11 op=UNLOAD Feb 9 10:09:11.908000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 10:09:11.908000 audit[934]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8b2 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:09:11.908000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:09:11.908000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 10:09:11.908000 audit[934]: SYSCALL arch=c00000b7 syscall=34 
success=yes exit=0 a0=ffffffffffffff9c a1=40001cd989 a2=1ed a3=0 items=2 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:09:11.908000 audit: CWD cwd="/" Feb 9 10:09:11.908000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:09:11.908000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 10:09:11.908000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 10:09:13.592000 audit: BPF prog-id=12 op=LOAD Feb 9 10:09:13.592000 audit: BPF prog-id=3 op=UNLOAD Feb 9 10:09:13.592000 audit: BPF prog-id=13 op=LOAD Feb 9 10:09:13.592000 audit: BPF prog-id=14 op=LOAD Feb 9 10:09:13.592000 audit: BPF prog-id=4 op=UNLOAD Feb 9 10:09:13.592000 audit: BPF prog-id=5 op=UNLOAD Feb 9 10:09:13.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:09:13.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.608000 audit: BPF prog-id=12 op=UNLOAD Feb 9 10:09:13.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:09:13.691000 audit: BPF prog-id=15 op=LOAD Feb 9 10:09:13.692000 audit: BPF prog-id=16 op=LOAD Feb 9 10:09:13.692000 audit: BPF prog-id=17 op=LOAD Feb 9 10:09:13.692000 audit: BPF prog-id=13 op=UNLOAD Feb 9 10:09:13.692000 audit: BPF prog-id=14 op=UNLOAD Feb 9 10:09:13.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:09:13.715000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 10:09:13.715000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc73db1f0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:09:13.715000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 10:09:11.906366 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:09:13.590894 systemd[1]: Queued start job for default target multi-user.target. Feb 9 10:09:11.906980 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 10:09:13.590906 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 10:09:11.907001 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 10:09:13.593619 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 10:09:11.907031 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 10:09:11.907041 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 10:09:11.907073 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 10:09:11.907085 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 10:09:13.719413 systemd[1]: Started systemd-journald.service. Feb 9 10:09:11.907303 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 10:09:11.907339 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 10:09:13.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:09:11.907350 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 10:09:11.907785 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 10:09:11.907818 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 10:09:11.907836 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 10:09:13.719853 systemd[1]: Mounted sys-kernel-debug.mount. 
Feb 9 10:09:11.907850 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 10:09:11.907867 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 10:09:11.907880 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 10:09:13.326292 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:09:13.326556 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:09:13.326665 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:09:13.326873 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:09:13.326961 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 10:09:13.327033 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-02-09T10:09:13Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 10:09:13.720852 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 10:09:13.721728 systemd[1]: Mounted tmp.mount.
Feb 9 10:09:13.722648 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:09:13.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.723667 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 10:09:13.723825 systemd[1]: Finished modprobe@configfs.service.
Feb 9 10:09:13.724871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 10:09:13.725036 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 10:09:13.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.726042 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 10:09:13.726227 systemd[1]: Finished modprobe@drm.service.
Feb 9 10:09:13.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.727208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 10:09:13.727385 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 10:09:13.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.728462 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 10:09:13.728601 systemd[1]: Finished modprobe@fuse.service.
Feb 9 10:09:13.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.729608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 10:09:13.729759 systemd[1]: Finished modprobe@loop.service.
Feb 9 10:09:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.731115 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:09:13.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.733341 systemd[1]: Finished systemd-network-generator.service.
Feb 9 10:09:13.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.734486 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 10:09:13.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.735672 systemd[1]: Reached target network-pre.target.
Feb 9 10:09:13.737513 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 10:09:13.739155 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 10:09:13.739837 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 10:09:13.742944 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 10:09:13.744661 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 10:09:13.745470 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 10:09:13.746419 systemd[1]: Starting systemd-random-seed.service...
Feb 9 10:09:13.747225 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 10:09:13.748483 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:09:13.751655 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 10:09:13.752568 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 10:09:13.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.756065 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 10:09:13.758319 systemd[1]: Starting systemd-sysusers.service...
Feb 9 10:09:13.759165 systemd-journald[995]: Time spent on flushing to /var/log/journal/4715bf354a4d459d822134ea0f046e8f is 13.985ms for 1002 entries.
Feb 9 10:09:13.759165 systemd-journald[995]: System Journal (/var/log/journal/4715bf354a4d459d822134ea0f046e8f) is 8.0M, max 195.6M, 187.6M free.
Feb 9 10:09:13.792356 systemd-journald[995]: Received client request to flush runtime journal.
Feb 9 10:09:13.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.759493 systemd[1]: Finished systemd-random-seed.service.
Feb 9 10:09:13.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:13.762199 systemd[1]: Reached target first-boot-complete.target.
Feb 9 10:09:13.795467 udevadm[1035]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 10:09:13.768109 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:09:13.769452 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:09:13.771535 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 10:09:13.782522 systemd[1]: Finished systemd-sysusers.service.
Feb 9 10:09:13.784564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:09:13.793180 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 10:09:13.803564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:09:13.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.159161 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 10:09:14.160000 audit: BPF prog-id=18 op=LOAD
Feb 9 10:09:14.160000 audit: BPF prog-id=19 op=LOAD
Feb 9 10:09:14.160000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 10:09:14.160000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 10:09:14.162027 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:09:14.185793 systemd-udevd[1039]: Using default interface naming scheme 'v252'.
Feb 9 10:09:14.208652 systemd[1]: Started systemd-udevd.service.
Feb 9 10:09:14.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.215000 audit: BPF prog-id=20 op=LOAD
Feb 9 10:09:14.216480 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:09:14.234138 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 10:09:14.235000 audit: BPF prog-id=21 op=LOAD
Feb 9 10:09:14.235000 audit: BPF prog-id=22 op=LOAD
Feb 9 10:09:14.235000 audit: BPF prog-id=23 op=LOAD
Feb 9 10:09:14.236019 systemd[1]: Starting systemd-userdbd.service...
Feb 9 10:09:14.265989 systemd[1]: Started systemd-userdbd.service.
Feb 9 10:09:14.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.298741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:09:14.306579 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 10:09:14.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.308404 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 10:09:14.322905 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:09:14.326640 systemd-networkd[1059]: lo: Link UP
Feb 9 10:09:14.326650 systemd-networkd[1059]: lo: Gained carrier
Feb 9 10:09:14.326980 systemd-networkd[1059]: Enumeration completed
Feb 9 10:09:14.327075 systemd[1]: Started systemd-networkd.service.
Feb 9 10:09:14.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.327953 systemd-networkd[1059]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:09:14.329102 systemd-networkd[1059]: eth0: Link UP
Feb 9 10:09:14.329113 systemd-networkd[1059]: eth0: Gained carrier
Feb 9 10:09:14.354121 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 10:09:14.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.355202 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:09:14.357081 systemd[1]: Starting lvm2-activation.service...
Feb 9 10:09:14.358043 systemd-networkd[1059]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:09:14.360296 lvm[1073]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:09:14.392104 systemd[1]: Finished lvm2-activation.service.
Feb 9 10:09:14.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.392904 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:09:14.393540 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 10:09:14.393566 systemd[1]: Reached target local-fs.target.
Feb 9 10:09:14.394102 systemd[1]: Reached target machines.target.
Feb 9 10:09:14.395934 systemd[1]: Starting ldconfig.service...
Feb 9 10:09:14.396871 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 10:09:14.396926 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:09:14.397950 systemd[1]: Starting systemd-boot-update.service...
Feb 9 10:09:14.399783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 10:09:14.401728 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 10:09:14.403055 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:09:14.403099 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:09:14.404441 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 10:09:14.406964 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1075 (bootctl)
Feb 9 10:09:14.409372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 10:09:14.411886 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 10:09:14.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.416593 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 10:09:14.418492 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 10:09:14.422366 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 10:09:14.512381 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 10:09:14.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.522841 systemd-fsck[1084]: fsck.fat 4.2 (2021-01-31)
Feb 9 10:09:14.522841 systemd-fsck[1084]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 9 10:09:14.527648 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 10:09:14.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.591570 ldconfig[1074]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 10:09:14.595363 systemd[1]: Finished ldconfig.service.
Feb 9 10:09:14.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.714524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 10:09:14.715923 systemd[1]: Mounting boot.mount...
Feb 9 10:09:14.722403 systemd[1]: Mounted boot.mount.
Feb 9 10:09:14.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.729321 systemd[1]: Finished systemd-boot-update.service.
Feb 9 10:09:14.779866 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 10:09:14.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.781764 systemd[1]: Starting audit-rules.service...
Feb 9 10:09:14.783273 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 10:09:14.784826 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 10:09:14.786000 audit: BPF prog-id=24 op=LOAD
Feb 9 10:09:14.787095 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:09:14.789000 audit: BPF prog-id=25 op=LOAD
Feb 9 10:09:14.790753 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 10:09:14.792377 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 10:09:14.794263 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 10:09:14.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.795411 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 10:09:14.796000 audit[1093]: SYSTEM_BOOT pid=1093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.800689 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 10:09:14.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.804572 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 10:09:14.806462 systemd[1]: Starting systemd-update-done.service...
Feb 9 10:09:14.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.811893 systemd[1]: Finished systemd-update-done.service.
Feb 9 10:09:14.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:09:14.819000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 10:09:14.819000 audit[1108]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9950c70 a2=420 a3=0 items=0 ppid=1087 pid=1108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:09:14.819000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 10:09:14.819725 augenrules[1108]: No rules
Feb 9 10:09:14.820466 systemd[1]: Finished audit-rules.service.
Feb 9 10:09:14.838038 systemd-resolved[1091]: Positive Trust Anchors:
Feb 9 10:09:14.838050 systemd-resolved[1091]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:09:14.838077 systemd-resolved[1091]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:09:14.839048 systemd[1]: Started systemd-timesyncd.service.
Feb 9 10:09:14.839883 systemd-timesyncd[1092]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 10:09:14.839937 systemd-timesyncd[1092]: Initial clock synchronization to Fri 2024-02-09 10:09:14.717838 UTC.
Feb 9 10:09:14.840142 systemd[1]: Reached target time-set.target.
Feb 9 10:09:14.850281 systemd-resolved[1091]: Defaulting to hostname 'linux'.
Feb 9 10:09:14.851634 systemd[1]: Started systemd-resolved.service.
Feb 9 10:09:14.852259 systemd[1]: Reached target network.target.
Feb 9 10:09:14.852793 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:09:14.853398 systemd[1]: Reached target sysinit.target.
Feb 9 10:09:14.854027 systemd[1]: Started motdgen.path.
Feb 9 10:09:14.854655 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 10:09:14.855538 systemd[1]: Started logrotate.timer.
Feb 9 10:09:14.856282 systemd[1]: Started mdadm.timer.
Feb 9 10:09:14.856873 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 10:09:14.857620 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 10:09:14.857650 systemd[1]: Reached target paths.target.
Feb 9 10:09:14.858302 systemd[1]: Reached target timers.target.
Feb 9 10:09:14.859301 systemd[1]: Listening on dbus.socket.
Feb 9 10:09:14.860913 systemd[1]: Starting docker.socket...
Feb 9 10:09:14.863729 systemd[1]: Listening on sshd.socket.
Feb 9 10:09:14.864525 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:09:14.865036 systemd[1]: Listening on docker.socket.
Feb 9 10:09:14.865728 systemd[1]: Reached target sockets.target.
Feb 9 10:09:14.866418 systemd[1]: Reached target basic.target.
Feb 9 10:09:14.867135 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:09:14.867173 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:09:14.868058 systemd[1]: Starting containerd.service...
Feb 9 10:09:14.869702 systemd[1]: Starting dbus.service...
Feb 9 10:09:14.871167 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 10:09:14.872977 systemd[1]: Starting extend-filesystems.service...
Feb 9 10:09:14.873786 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 10:09:14.875219 systemd[1]: Starting motdgen.service...
Feb 9 10:09:14.876857 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 10:09:14.881365 systemd[1]: Starting prepare-critools.service...
Feb 9 10:09:14.883133 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 10:09:14.884745 systemd[1]: Starting sshd-keygen.service...
Feb 9 10:09:14.886364 jq[1118]: false
Feb 9 10:09:14.887137 systemd[1]: Starting systemd-logind.service...
Feb 9 10:09:14.888145 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:09:14.888216 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 10:09:14.888589 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 10:09:14.889206 systemd[1]: Starting update-engine.service...
Feb 9 10:09:14.890972 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 10:09:14.894160 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 10:09:14.894363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 10:09:14.896002 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 10:09:14.896165 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 10:09:14.900958 jq[1133]: true
Feb 9 10:09:14.910441 jq[1140]: true
Feb 9 10:09:14.921944 extend-filesystems[1119]: Found vda
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda1
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda2
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda3
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found usr
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda4
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda6
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda7
Feb 9 10:09:14.922998 extend-filesystems[1119]: Found vda9
Feb 9 10:09:14.922998 extend-filesystems[1119]: Checking size of /dev/vda9
Feb 9 10:09:14.929498 tar[1136]: crictl
Feb 9 10:09:14.929668 tar[1135]: ./
Feb 9 10:09:14.929668 tar[1135]: ./loopback
Feb 9 10:09:14.931565 dbus-daemon[1117]: [system] SELinux support is enabled
Feb 9 10:09:14.931715 systemd[1]: Started dbus.service.
Feb 9 10:09:14.934410 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 10:09:14.934555 systemd[1]: Finished motdgen.service.
Feb 9 10:09:14.935468 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 10:09:14.935492 systemd[1]: Reached target system-config.target.
Feb 9 10:09:14.936283 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 10:09:14.936300 systemd[1]: Reached target user-config.target.
Feb 9 10:09:14.953024 extend-filesystems[1119]: Resized partition /dev/vda9
Feb 9 10:09:14.962899 extend-filesystems[1169]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 10:09:14.970072 systemd-logind[1128]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 9 10:09:14.970297 systemd-logind[1128]: New seat seat0.
Feb 9 10:09:14.978582 bash[1166]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 10:09:14.979362 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 10:09:14.981516 systemd[1]: Started systemd-logind.service.
Feb 9 10:09:14.995223 tar[1135]: ./bandwidth
Feb 9 10:09:15.000204 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 10:09:15.016204 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 10:09:15.019506 update_engine[1132]: I0209 10:09:15.019274  1132 main.cc:92] Flatcar Update Engine starting
Feb 9 10:09:15.021849 systemd[1]: Started update-engine.service.
Feb 9 10:09:15.024439 systemd[1]: Started locksmithd.service.
Feb 9 10:09:15.029650 update_engine[1132]: I0209 10:09:15.021858  1132 update_check_scheduler.cc:74] Next update check in 7m52s
Feb 9 10:09:15.029753 extend-filesystems[1169]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 10:09:15.029753 extend-filesystems[1169]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 10:09:15.029753 extend-filesystems[1169]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 10:09:15.033513 extend-filesystems[1119]: Resized filesystem in /dev/vda9
Feb 9 10:09:15.031698 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 10:09:15.031868 systemd[1]: Finished extend-filesystems.service.
Feb 9 10:09:15.044702 env[1137]: time="2024-02-09T10:09:15.043804443Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 10:09:15.063407 tar[1135]: ./ptp
Feb 9 10:09:15.066315 env[1137]: time="2024-02-09T10:09:15.066279243Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 10:09:15.066427 env[1137]: time="2024-02-09T10:09:15.066408679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069280 env[1137]: time="2024-02-09T10:09:15.069243628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069280 env[1137]: time="2024-02-09T10:09:15.069277503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069506 env[1137]: time="2024-02-09T10:09:15.069484026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069506 env[1137]: time="2024-02-09T10:09:15.069504627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069557 env[1137]: time="2024-02-09T10:09:15.069518414Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 10:09:15.069557 env[1137]: time="2024-02-09T10:09:15.069527867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069625 env[1137]: time="2024-02-09T10:09:15.069596012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069815 env[1137]: time="2024-02-09T10:09:15.069794775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069929 env[1137]: time="2024-02-09T10:09:15.069908928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:09:15.069961 env[1137]: time="2024-02-09T10:09:15.069927914Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 10:09:15.069997 env[1137]: time="2024-02-09T10:09:15.069980499Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 10:09:15.070031 env[1137]: time="2024-02-09T10:09:15.069995428Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 10:09:15.075601 env[1137]: time="2024-02-09T10:09:15.075569608Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 10:09:15.075601 env[1137]: time="2024-02-09T10:09:15.075604350Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 10:09:15.075687 env[1137]: time="2024-02-09T10:09:15.075617742Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 10:09:15.075687 env[1137]: time="2024-02-09T10:09:15.075647955Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.075687 env[1137]: time="2024-02-09T10:09:15.075663435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.075687 env[1137]: time="2024-02-09T10:09:15.075677143Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.075769 env[1137]: time="2024-02-09T10:09:15.075690614Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076042 env[1137]: time="2024-02-09T10:09:15.076023343Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076120 env[1137]: time="2024-02-09T10:09:15.076046386Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076120 env[1137]: time="2024-02-09T10:09:15.076060330Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076120 env[1137]: time="2024-02-09T10:09:15.076082940Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076120 env[1137]: time="2024-02-09T10:09:15.076096175Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 10:09:15.076251 env[1137]: time="2024-02-09T10:09:15.076229984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 10:09:15.076326 env[1137]: time="2024-02-09T10:09:15.076310182Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 10:09:15.076552 env[1137]: time="2024-02-09T10:09:15.076534863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 10:09:15.076581 env[1137]: time="2024-02-09T10:09:15.076563303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 10:09:15.076602 env[1137]: time="2024-02-09T10:09:15.076580280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 10:09:15.076696 env[1137]: time="2024-02-09T10:09:15.076682931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1 Feb 9 10:09:15.076725 env[1137]: time="2024-02-09T10:09:15.076697978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076725 env[1137]: time="2024-02-09T10:09:15.076710662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076725 env[1137]: time="2024-02-09T10:09:15.076722794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076781 env[1137]: time="2024-02-09T10:09:15.076734453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076781 env[1137]: time="2024-02-09T10:09:15.076746979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076781 env[1137]: time="2024-02-09T10:09:15.076757339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076781 env[1137]: time="2024-02-09T10:09:15.076768959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076855 env[1137]: time="2024-02-09T10:09:15.076781170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 10:09:15.076913 env[1137]: time="2024-02-09T10:09:15.076896189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076945 env[1137]: time="2024-02-09T10:09:15.076915569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.076945 env[1137]: time="2024-02-09T10:09:15.076927741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 10:09:15.076945 env[1137]: time="2024-02-09T10:09:15.076939006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 10:09:15.076999 env[1137]: time="2024-02-09T10:09:15.076953305Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 10:09:15.076999 env[1137]: time="2024-02-09T10:09:15.076963861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 10:09:15.076999 env[1137]: time="2024-02-09T10:09:15.076980130Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 10:09:15.077075 env[1137]: time="2024-02-09T10:09:15.077011366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 10:09:15.077317 env[1137]: time="2024-02-09T10:09:15.077232424Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 10:09:15.077317 env[1137]: time="2024-02-09T10:09:15.077292887Z" level=info msg="Connect containerd service" Feb 9 10:09:15.077925 env[1137]: time="2024-02-09T10:09:15.077322627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 10:09:15.077925 env[1137]: time="2024-02-09T10:09:15.077895084Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:09:15.078072 env[1137]: time="2024-02-09T10:09:15.078022196Z" level=info msg="Start subscribing containerd event" Feb 9 10:09:15.078111 env[1137]: time="2024-02-09T10:09:15.078091089Z" level=info msg="Start recovering state" Feb 9 10:09:15.078173 env[1137]: 
time="2024-02-09T10:09:15.078155650Z" level=info msg="Start event monitor" Feb 9 10:09:15.078212 env[1137]: time="2024-02-09T10:09:15.078194883Z" level=info msg="Start snapshots syncer" Feb 9 10:09:15.078212 env[1137]: time="2024-02-09T10:09:15.078205912Z" level=info msg="Start cni network conf syncer for default" Feb 9 10:09:15.078250 env[1137]: time="2024-02-09T10:09:15.078213081Z" level=info msg="Start streaming server" Feb 9 10:09:15.078348 env[1137]: time="2024-02-09T10:09:15.078327391Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 10:09:15.078377 env[1137]: time="2024-02-09T10:09:15.078370641Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 10:09:15.078439 env[1137]: time="2024-02-09T10:09:15.078425866Z" level=info msg="containerd successfully booted in 0.035971s" Feb 9 10:09:15.078499 systemd[1]: Started containerd.service. Feb 9 10:09:15.093848 locksmithd[1172]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 10:09:15.104541 tar[1135]: ./vlan Feb 9 10:09:15.132941 tar[1135]: ./host-device Feb 9 10:09:15.160430 tar[1135]: ./tuning Feb 9 10:09:15.185320 tar[1135]: ./vrf Feb 9 10:09:15.210649 tar[1135]: ./sbr Feb 9 10:09:15.235651 tar[1135]: ./tap Feb 9 10:09:15.264422 tar[1135]: ./dhcp Feb 9 10:09:15.334724 tar[1135]: ./static Feb 9 10:09:15.355337 tar[1135]: ./firewall Feb 9 10:09:15.364518 systemd[1]: Finished prepare-critools.service. Feb 9 10:09:15.386644 tar[1135]: ./macvlan Feb 9 10:09:15.415105 tar[1135]: ./dummy Feb 9 10:09:15.443116 tar[1135]: ./bridge Feb 9 10:09:15.473649 tar[1135]: ./ipvlan Feb 9 10:09:15.501665 tar[1135]: ./portmap Feb 9 10:09:15.528335 tar[1135]: ./host-local Feb 9 10:09:15.564676 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 10:09:15.772442 systemd-networkd[1059]: eth0: Gained IPv6LL Feb 9 10:09:16.075036 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 10:09:16.091710 systemd[1]: Finished sshd-keygen.service. Feb 9 10:09:16.093820 systemd[1]: Starting issuegen.service... Feb 9 10:09:16.098021 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 10:09:16.098152 systemd[1]: Finished issuegen.service. Feb 9 10:09:16.100180 systemd[1]: Starting systemd-user-sessions.service... Feb 9 10:09:16.105747 systemd[1]: Finished systemd-user-sessions.service. Feb 9 10:09:16.107793 systemd[1]: Started getty@tty1.service. Feb 9 10:09:16.109563 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 10:09:16.110511 systemd[1]: Reached target getty.target. Feb 9 10:09:16.111297 systemd[1]: Reached target multi-user.target. Feb 9 10:09:16.113064 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 10:09:16.119378 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 10:09:16.119514 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 10:09:16.120514 systemd[1]: Startup finished in 571ms (kernel) + 5.090s (initrd) + 4.446s (userspace) = 10.108s. Feb 9 10:09:19.052752 systemd[1]: Created slice system-sshd.slice. Feb 9 10:09:19.053747 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:48742.service. Feb 9 10:09:19.104470 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:19.105885 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.116154 systemd-logind[1128]: New session 1 of user core. Feb 9 10:09:19.117031 systemd[1]: Created slice user-500.slice. Feb 9 10:09:19.118056 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 10:09:19.125500 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 10:09:19.126662 systemd[1]: Starting user@500.service... 
Feb 9 10:09:19.129105 (systemd)[1204]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.184868 systemd[1204]: Queued start job for default target default.target. Feb 9 10:09:19.185338 systemd[1204]: Reached target paths.target. Feb 9 10:09:19.185364 systemd[1204]: Reached target sockets.target. Feb 9 10:09:19.185375 systemd[1204]: Reached target timers.target. Feb 9 10:09:19.185385 systemd[1204]: Reached target basic.target. Feb 9 10:09:19.185434 systemd[1204]: Reached target default.target. Feb 9 10:09:19.185455 systemd[1204]: Startup finished in 51ms. Feb 9 10:09:19.185873 systemd[1]: Started user@500.service. Feb 9 10:09:19.186893 systemd[1]: Started session-1.scope. Feb 9 10:09:19.236802 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:48750.service. Feb 9 10:09:19.275762 sshd[1213]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:19.277335 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.281380 systemd-logind[1128]: New session 2 of user core. Feb 9 10:09:19.281456 systemd[1]: Started session-2.scope. Feb 9 10:09:19.335215 sshd[1213]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:19.338043 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:48750.service: Deactivated successfully. Feb 9 10:09:19.338698 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 10:09:19.339559 systemd-logind[1128]: Session 2 logged out. Waiting for processes to exit. Feb 9 10:09:19.340968 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:48752.service. Feb 9 10:09:19.341522 systemd-logind[1128]: Removed session 2. 
Feb 9 10:09:19.379663 sshd[1219]: Accepted publickey for core from 10.0.0.1 port 48752 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:19.380799 sshd[1219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.383954 systemd-logind[1128]: New session 3 of user core. Feb 9 10:09:19.384761 systemd[1]: Started session-3.scope. Feb 9 10:09:19.433708 sshd[1219]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:19.436447 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:48752.service: Deactivated successfully. Feb 9 10:09:19.437020 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 10:09:19.437455 systemd-logind[1128]: Session 3 logged out. Waiting for processes to exit. Feb 9 10:09:19.438376 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:48764.service. Feb 9 10:09:19.438883 systemd-logind[1128]: Removed session 3. Feb 9 10:09:19.478174 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 48764 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:19.479275 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.482335 systemd-logind[1128]: New session 4 of user core. Feb 9 10:09:19.483063 systemd[1]: Started session-4.scope. Feb 9 10:09:19.535112 sshd[1225]: pam_unix(sshd:session): session closed for user core Feb 9 10:09:19.537603 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:48764.service: Deactivated successfully. Feb 9 10:09:19.538216 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 10:09:19.538651 systemd-logind[1128]: Session 4 logged out. Waiting for processes to exit. Feb 9 10:09:19.539627 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:48776.service. Feb 9 10:09:19.540364 systemd-logind[1128]: Removed session 4. 
Feb 9 10:09:19.578776 sshd[1231]: Accepted publickey for core from 10.0.0.1 port 48776 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:09:19.579829 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:09:19.582725 systemd-logind[1128]: New session 5 of user core. Feb 9 10:09:19.583467 systemd[1]: Started session-5.scope. Feb 9 10:09:19.640432 sudo[1234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 10:09:19.641237 sudo[1234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 10:09:20.179109 systemd[1]: Reloading. Feb 9 10:09:20.230426 /usr/lib/systemd/system-generators/torcx-generator[1264]: time="2024-02-09T10:09:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:09:20.230730 /usr/lib/systemd/system-generators/torcx-generator[1264]: time="2024-02-09T10:09:20Z" level=info msg="torcx already run" Feb 9 10:09:20.289992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:09:20.290009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:09:20.306915 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:09:20.374357 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 10:09:20.463151 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 10:09:20.463768 systemd[1]: Reached target network-online.target. 
Feb 9 10:09:20.465364 systemd[1]: Started kubelet.service. Feb 9 10:09:20.476009 systemd[1]: Starting coreos-metadata.service... Feb 9 10:09:20.483410 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 10:09:20.483614 systemd[1]: Finished coreos-metadata.service. Feb 9 10:09:20.637176 kubelet[1302]: E0209 10:09:20.637113 1302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 10:09:20.639901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 10:09:20.640040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 10:09:20.783961 systemd[1]: Stopped kubelet.service. Feb 9 10:09:20.798214 systemd[1]: Reloading. Feb 9 10:09:20.845589 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-09T10:09:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:09:20.845616 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-09T10:09:20Z" level=info msg="torcx already run" Feb 9 10:09:20.896255 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:09:20.896276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 10:09:20.912970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:09:20.974525 systemd[1]: Started kubelet.service. Feb 9 10:09:21.016490 kubelet[1407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:09:21.016490 kubelet[1407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:09:21.016490 kubelet[1407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:09:21.016799 kubelet[1407]: I0209 10:09:21.016535 1407 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:09:21.713407 kubelet[1407]: I0209 10:09:21.713366 1407 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 10:09:21.713407 kubelet[1407]: I0209 10:09:21.713398 1407 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:09:21.713604 kubelet[1407]: I0209 10:09:21.713591 1407 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 10:09:21.716997 kubelet[1407]: I0209 10:09:21.716964 1407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:09:21.721943 kubelet[1407]: W0209 10:09:21.721921 1407 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 10:09:21.722554 kubelet[1407]: I0209 10:09:21.722537 1407 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 10:09:21.722716 kubelet[1407]: I0209 10:09:21.722707 1407 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:09:21.722869 kubelet[1407]: I0209 10:09:21.722847 1407 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 10:09:21.722869 kubelet[1407]: I0209 10:09:21.722871 1407 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 10:09:21.722956 
kubelet[1407]: I0209 10:09:21.722880 1407 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 10:09:21.722956 kubelet[1407]: I0209 10:09:21.722955 1407 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:09:21.723171 kubelet[1407]: I0209 10:09:21.723159 1407 kubelet.go:393] "Attempting to sync node with API server" Feb 9 10:09:21.723211 kubelet[1407]: I0209 10:09:21.723175 1407 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:09:21.723211 kubelet[1407]: I0209 10:09:21.723201 1407 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:09:21.723211 kubelet[1407]: I0209 10:09:21.723211 1407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:09:21.723339 kubelet[1407]: E0209 10:09:21.723314 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:21.723373 kubelet[1407]: E0209 10:09:21.723356 1407 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:21.723971 kubelet[1407]: I0209 10:09:21.723954 1407 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:09:21.724384 kubelet[1407]: W0209 10:09:21.724367 1407 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 10:09:21.725034 kubelet[1407]: I0209 10:09:21.725020 1407 server.go:1232] "Started kubelet" Feb 9 10:09:21.725616 kubelet[1407]: I0209 10:09:21.725564 1407 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:09:21.725750 kubelet[1407]: I0209 10:09:21.725728 1407 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:09:21.725976 kubelet[1407]: I0209 10:09:21.725959 1407 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 10:09:21.726385 kubelet[1407]: I0209 10:09:21.726364 1407 server.go:462] "Adding debug handlers to kubelet server" Feb 9 10:09:21.726907 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 10:09:21.727115 kubelet[1407]: I0209 10:09:21.727014 1407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:09:21.727205 kubelet[1407]: E0209 10:09:21.727158 1407 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:09:21.727235 kubelet[1407]: E0209 10:09:21.727216 1407 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:09:21.727335 kubelet[1407]: I0209 10:09:21.727323 1407 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 10:09:21.727434 kubelet[1407]: I0209 10:09:21.727405 1407 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 10:09:21.727466 kubelet[1407]: I0209 10:09:21.727457 1407 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 10:09:21.749299 kubelet[1407]: I0209 10:09:21.749261 1407 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:09:21.749299 kubelet[1407]: I0209 10:09:21.749280 1407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:09:21.749299 kubelet[1407]: I0209 10:09:21.749298 1407 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:09:21.750311 kubelet[1407]: E0209 10:09:21.750178 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d50f8a97", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 725000343, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 725000343, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:09:21.750979 kubelet[1407]: W0209 10:09:21.750954 1407 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.134" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:09:21.751817 kubelet[1407]: E0209 10:09:21.751789 1407 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.134" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:09:21.751817 kubelet[1407]: I0209 10:09:21.750988 1407 policy_none.go:49] "None policy: Start" Feb 9 10:09:21.752970 kubelet[1407]: E0209 10:09:21.752933 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.134\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 10:09:21.753045 kubelet[1407]: I0209 10:09:21.753008 1407 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:09:21.753045 kubelet[1407]: I0209 10:09:21.753030 1407 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:09:21.753160 kubelet[1407]: W0209 10:09:21.753129 1407 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 10:09:21.753160 kubelet[1407]: E0209 10:09:21.753158 1407 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 10:09:21.753243 kubelet[1407]: W0209 10:09:21.753216 1407 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 10:09:21.753243 kubelet[1407]: E0209 10:09:21.753227 1407 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 10:09:21.753361 kubelet[1407]: E0209 10:09:21.753292 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d530b5ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 727174127, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 727174127, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:21.754133 kubelet[1407]: E0209 10:09:21.754055 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d6589314", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.134 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746563860, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746563860, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:21.755466 kubelet[1407]: E0209 10:09:21.755354 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658a3af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.134 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746568111, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746568111, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:09:21.756380 kubelet[1407]: E0209 10:09:21.756307 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658b55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.134 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746572639, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746572639, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:21.758293 systemd[1]: Created slice kubepods.slice.
Feb 9 10:09:21.762132 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 10:09:21.764455 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 10:09:21.773768 kubelet[1407]: I0209 10:09:21.773736 1407 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 10:09:21.774471 kubelet[1407]: I0209 10:09:21.774428 1407 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 10:09:21.774820 kubelet[1407]: E0209 10:09:21.774796 1407 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.134\" not found"
Feb 9 10:09:21.775902 kubelet[1407]: E0209 10:09:21.775834 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d8076be7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 774799847, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 774799847, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:21.804868 kubelet[1407]: I0209 10:09:21.804848 1407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 10:09:21.806001 kubelet[1407]: I0209 10:09:21.805978 1407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 10:09:21.806069 kubelet[1407]: I0209 10:09:21.806007 1407 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 10:09:21.806069 kubelet[1407]: I0209 10:09:21.806026 1407 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 10:09:21.806069 kubelet[1407]: E0209 10:09:21.806065 1407 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 10:09:21.807194 kubelet[1407]: W0209 10:09:21.807164 1407 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 10:09:21.807194 kubelet[1407]: E0209 10:09:21.807198 1407 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 10:09:21.828898 kubelet[1407]: I0209 10:09:21.828859 1407 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.134"
Feb 9 10:09:21.829779 kubelet[1407]: E0209 10:09:21.829753 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.134"
Feb 9 10:09:21.830077 kubelet[1407]: E0209 10:09:21.830016 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d6589314", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.134 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746563860, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 828814549, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d6589314" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:21.830999 kubelet[1407]: E0209 10:09:21.830948 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658a3af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.134 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746568111, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 828832942, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658a3af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:09:21.831764 kubelet[1407]: E0209 10:09:21.831701 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658b55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.134 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746572639, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 828836001, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658b55f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:09:21.954137 kubelet[1407]: E0209 10:09:21.954110 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.134\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 9 10:09:22.031611 kubelet[1407]: I0209 10:09:22.031588 1407 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.134"
Feb 9 10:09:22.032899 kubelet[1407]: E0209 10:09:22.032820 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d6589314", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.134 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746563860, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 31542496, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d6589314" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:22.033443 kubelet[1407]: E0209 10:09:22.033393 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.134"
Feb 9 10:09:22.034057 kubelet[1407]: E0209 10:09:22.033989 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658a3af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.134 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746568111, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 31555498, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658a3af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:22.035093 kubelet[1407]: E0209 10:09:22.035038 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658b55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.134 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746572639, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 31559872, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658b55f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:09:22.356279 kubelet[1407]: E0209 10:09:22.356168 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.134\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 9 10:09:22.434196 kubelet[1407]: I0209 10:09:22.434163 1407 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.134"
Feb 9 10:09:22.435442 kubelet[1407]: E0209 10:09:22.435423 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.134"
Feb 9 10:09:22.435509 kubelet[1407]: E0209 10:09:22.435418 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d6589314", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.134 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746563860, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 434109087, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d6589314" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:22.436266 kubelet[1407]: E0209 10:09:22.436212 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658a3af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.134 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746568111, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 434123162, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658a3af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 10:09:22.437006 kubelet[1407]: E0209 10:09:22.436946 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.134.17b229f9d658b55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.134", UID:"10.0.0.134", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.134 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.134"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 9, 21, 746572639, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 9, 22, 434125985, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.134"}': 'events "10.0.0.134.17b229f9d658b55f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:09:22.715812 kubelet[1407]: I0209 10:09:22.715721 1407 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 10:09:22.724018 kubelet[1407]: E0209 10:09:22.723985 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:09:23.096587 kubelet[1407]: E0209 10:09:23.096549 1407 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.134" not found
Feb 9 10:09:23.165054 kubelet[1407]: E0209 10:09:23.165020 1407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.134\" not found" node="10.0.0.134"
Feb 9 10:09:23.237180 kubelet[1407]: I0209 10:09:23.237153 1407 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.134"
Feb 9 10:09:23.241992 kubelet[1407]: I0209 10:09:23.241945 1407 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.134"
Feb 9 10:09:23.260176 kubelet[1407]: I0209 10:09:23.260122 1407 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 9 10:09:23.260460 env[1137]: time="2024-02-09T10:09:23.260393833Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 10:09:23.260703 kubelet[1407]: I0209 10:09:23.260553 1407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 9 10:09:23.490962 sudo[1234]: pam_unix(sudo:session): session closed for user root
Feb 9 10:09:23.492882 sshd[1231]: pam_unix(sshd:session): session closed for user core
Feb 9 10:09:23.494950 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 10:09:23.495650 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:48776.service: Deactivated successfully.
Feb 9 10:09:23.495760 systemd-logind[1128]: Session 5 logged out. Waiting for processes to exit.
Feb 9 10:09:23.496637 systemd-logind[1128]: Removed session 5.
Feb 9 10:09:23.724988 kubelet[1407]: I0209 10:09:23.724911 1407 apiserver.go:52] "Watching apiserver"
Feb 9 10:09:23.725216 kubelet[1407]: E0209 10:09:23.725167 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:09:23.728841 kubelet[1407]: I0209 10:09:23.728812 1407 topology_manager.go:215] "Topology Admit Handler" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" podNamespace="kube-system" podName="cilium-47sl2"
Feb 9 10:09:23.728953 kubelet[1407]: I0209 10:09:23.728929 1407 topology_manager.go:215] "Topology Admit Handler" podUID="af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22" podNamespace="kube-system" podName="kube-proxy-bxhv9"
Feb 9 10:09:23.733242 systemd[1]: Created slice kubepods-besteffort-podaf7d3a7a_d2aa_4a20_a993_7ee6d8cc2d22.slice.
Feb 9 10:09:23.753551 systemd[1]: Created slice kubepods-burstable-podd3f05c51_6c3c_42d8_a4f2_4f9049e18ac4.slice.
Feb 9 10:09:23.828582 kubelet[1407]: I0209 10:09:23.828527 1407 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 10:09:23.836877 kubelet[1407]: I0209 10:09:23.836856 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-bpf-maps\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.836921 kubelet[1407]: I0209 10:09:23.836887 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cni-path\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.836921 kubelet[1407]: I0209 10:09:23.836913 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-etc-cni-netd\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.836972 kubelet[1407]: I0209 10:09:23.836933 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-kernel\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.836972 kubelet[1407]: I0209 10:09:23.836953 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-run\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837033 kubelet[1407]: I0209 10:09:23.837004 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hostproc\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837070 kubelet[1407]: I0209 10:09:23.837053 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-cgroup\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837095 kubelet[1407]: I0209 10:09:23.837081 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22-lib-modules\") pod \"kube-proxy-bxhv9\" (UID: \"af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22\") " pod="kube-system/kube-proxy-bxhv9"
Feb 9 10:09:23.837116 kubelet[1407]: I0209 10:09:23.837106 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxn9x\" (UniqueName: \"kubernetes.io/projected/af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22-kube-api-access-gxn9x\") pod \"kube-proxy-bxhv9\" (UID: \"af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22\") " pod="kube-system/kube-proxy-bxhv9"
Feb 9 10:09:23.837141 kubelet[1407]: I0209 10:09:23.837124 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-lib-modules\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837163 kubelet[1407]: I0209 10:09:23.837142 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-xtables-lock\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837195 kubelet[1407]: I0209 10:09:23.837163 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-config-path\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837218 kubelet[1407]: I0209 10:09:23.837210 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-net\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837239 kubelet[1407]: I0209 10:09:23.837232 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22-kube-proxy\") pod \"kube-proxy-bxhv9\" (UID: \"af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22\") " pod="kube-system/kube-proxy-bxhv9"
Feb 9 10:09:23.837266 kubelet[1407]: I0209 10:09:23.837251 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-clustermesh-secrets\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837288 kubelet[1407]: I0209 10:09:23.837268 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hubble-tls\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837288 kubelet[1407]: I0209 10:09:23.837286 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbgv\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-kube-api-access-rbbgv\") pod \"cilium-47sl2\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " pod="kube-system/cilium-47sl2"
Feb 9 10:09:23.837327 kubelet[1407]: I0209 10:09:23.837305 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22-xtables-lock\") pod \"kube-proxy-bxhv9\" (UID: \"af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22\") " pod="kube-system/kube-proxy-bxhv9"
Feb 9 10:09:24.053129 kubelet[1407]: E0209 10:09:24.053090 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:09:24.054383 env[1137]: time="2024-02-09T10:09:24.054328540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxhv9,Uid:af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22,Namespace:kube-system,Attempt:0,}"
Feb 9 10:09:24.065917 kubelet[1407]: E0209 10:09:24.065887 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:09:24.066877 env[1137]: time="2024-02-09T10:09:24.066591867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47sl2,Uid:d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4,Namespace:kube-system,Attempt:0,}"
Feb 9 10:09:24.577791 env[1137]: time="2024-02-09T10:09:24.577737286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.580009 env[1137]: time="2024-02-09T10:09:24.579963833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.581424 env[1137]: time="2024-02-09T10:09:24.581387835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.584048 env[1137]: time="2024-02-09T10:09:24.584000604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.585069 env[1137]: time="2024-02-09T10:09:24.585041928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.586667 env[1137]: time="2024-02-09T10:09:24.586635390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.587401 env[1137]: time="2024-02-09T10:09:24.587366703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:09:24.589749 env[1137]: time="2024-02-09T10:09:24.589724127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9
10:09:24.620860 env[1137]: time="2024-02-09T10:09:24.620782344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:24.620860 env[1137]: time="2024-02-09T10:09:24.620841312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:24.620860 env[1137]: time="2024-02-09T10:09:24.620853019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:24.621134 env[1137]: time="2024-02-09T10:09:24.621100957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2141e8d406eab9e0443a1d91b163f27fad2970525b58b0f5b3d861ad0e6f201c pid=1470 runtime=io.containerd.runc.v2 Feb 9 10:09:24.621293 env[1137]: time="2024-02-09T10:09:24.621235457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:24.621386 env[1137]: time="2024-02-09T10:09:24.621285985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:24.621386 env[1137]: time="2024-02-09T10:09:24.621364941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:24.621652 env[1137]: time="2024-02-09T10:09:24.621611725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba pid=1469 runtime=io.containerd.runc.v2 Feb 9 10:09:24.644047 systemd[1]: Started cri-containerd-2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba.scope. 
Feb 9 10:09:24.647080 systemd[1]: Started cri-containerd-2141e8d406eab9e0443a1d91b163f27fad2970525b58b0f5b3d861ad0e6f201c.scope. Feb 9 10:09:24.684627 env[1137]: time="2024-02-09T10:09:24.684576771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47sl2,Uid:d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\"" Feb 9 10:09:24.685580 kubelet[1407]: E0209 10:09:24.685558 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:24.686871 env[1137]: time="2024-02-09T10:09:24.686824023Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 10:09:24.690863 env[1137]: time="2024-02-09T10:09:24.690828383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxhv9,Uid:af7d3a7a-d2aa-4a20-a993-7ee6d8cc2d22,Namespace:kube-system,Attempt:0,} returns sandbox id \"2141e8d406eab9e0443a1d91b163f27fad2970525b58b0f5b3d861ad0e6f201c\"" Feb 9 10:09:24.691463 kubelet[1407]: E0209 10:09:24.691443 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:24.725909 kubelet[1407]: E0209 10:09:24.725861 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:24.945129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473928937.mount: Deactivated successfully. 
Feb 9 10:09:25.726588 kubelet[1407]: E0209 10:09:25.726534 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:26.726860 kubelet[1407]: E0209 10:09:26.726820 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:27.727155 kubelet[1407]: E0209 10:09:27.727116 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:27.799772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290222677.mount: Deactivated successfully. Feb 9 10:09:28.727481 kubelet[1407]: E0209 10:09:28.727440 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:29.728326 kubelet[1407]: E0209 10:09:29.728279 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:30.028706 env[1137]: time="2024-02-09T10:09:30.028598195Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:30.030047 env[1137]: time="2024-02-09T10:09:30.030022100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:30.032241 env[1137]: time="2024-02-09T10:09:30.032210908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:30.032805 env[1137]: time="2024-02-09T10:09:30.032756583Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 10:09:30.033771 env[1137]: time="2024-02-09T10:09:30.033742750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 10:09:30.035143 env[1137]: time="2024-02-09T10:09:30.034992214Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:09:30.050121 env[1137]: time="2024-02-09T10:09:30.050072527Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\"" Feb 9 10:09:30.051401 env[1137]: time="2024-02-09T10:09:30.051372048Z" level=info msg="StartContainer for \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\"" Feb 9 10:09:30.066172 systemd[1]: run-containerd-runc-k8s.io-b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37-runc.MOWDJB.mount: Deactivated successfully. Feb 9 10:09:30.068254 systemd[1]: Started cri-containerd-b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37.scope. Feb 9 10:09:30.107450 env[1137]: time="2024-02-09T10:09:30.107400308Z" level=info msg="StartContainer for \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\" returns successfully" Feb 9 10:09:30.134591 systemd[1]: cri-containerd-b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37.scope: Deactivated successfully. 
Feb 9 10:09:30.241168 env[1137]: time="2024-02-09T10:09:30.241108872Z" level=info msg="shim disconnected" id=b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37 Feb 9 10:09:30.241168 env[1137]: time="2024-02-09T10:09:30.241157492Z" level=warning msg="cleaning up after shim disconnected" id=b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37 namespace=k8s.io Feb 9 10:09:30.241168 env[1137]: time="2024-02-09T10:09:30.241166952Z" level=info msg="cleaning up dead shim" Feb 9 10:09:30.248918 env[1137]: time="2024-02-09T10:09:30.248871070Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1586 runtime=io.containerd.runc.v2\n" Feb 9 10:09:30.729325 kubelet[1407]: E0209 10:09:30.729288 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:30.822613 kubelet[1407]: E0209 10:09:30.822427 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:30.824425 env[1137]: time="2024-02-09T10:09:30.824391973Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:09:30.867135 env[1137]: time="2024-02-09T10:09:30.867074744Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\"" Feb 9 10:09:30.867620 env[1137]: time="2024-02-09T10:09:30.867593914Z" level=info msg="StartContainer for \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\"" Feb 9 10:09:30.883346 systemd[1]: Started 
cri-containerd-d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7.scope. Feb 9 10:09:30.920246 env[1137]: time="2024-02-09T10:09:30.920197873Z" level=info msg="StartContainer for \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\" returns successfully" Feb 9 10:09:30.939797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:09:30.939989 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:09:30.940159 systemd[1]: Stopping systemd-sysctl.service... Feb 9 10:09:30.941583 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:09:30.943269 systemd[1]: cri-containerd-d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7.scope: Deactivated successfully. Feb 9 10:09:30.948857 systemd[1]: Finished systemd-sysctl.service. Feb 9 10:09:30.977192 env[1137]: time="2024-02-09T10:09:30.977139889Z" level=info msg="shim disconnected" id=d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7 Feb 9 10:09:30.977354 env[1137]: time="2024-02-09T10:09:30.977192740Z" level=warning msg="cleaning up after shim disconnected" id=d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7 namespace=k8s.io Feb 9 10:09:30.977354 env[1137]: time="2024-02-09T10:09:30.977221241Z" level=info msg="cleaning up dead shim" Feb 9 10:09:30.988132 env[1137]: time="2024-02-09T10:09:30.988045687Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1651 runtime=io.containerd.runc.v2\n" Feb 9 10:09:31.042819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37-rootfs.mount: Deactivated successfully. Feb 9 10:09:31.157115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922485540.mount: Deactivated successfully. 
Feb 9 10:09:31.556907 env[1137]: time="2024-02-09T10:09:31.556861817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:31.558170 env[1137]: time="2024-02-09T10:09:31.558146180Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:31.559534 env[1137]: time="2024-02-09T10:09:31.559498542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:31.560758 env[1137]: time="2024-02-09T10:09:31.560727645Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:31.561242 env[1137]: time="2024-02-09T10:09:31.561216604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 9 10:09:31.563016 env[1137]: time="2024-02-09T10:09:31.562991483Z" level=info msg="CreateContainer within sandbox \"2141e8d406eab9e0443a1d91b163f27fad2970525b58b0f5b3d861ad0e6f201c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 10:09:31.574886 env[1137]: time="2024-02-09T10:09:31.574852414Z" level=info msg="CreateContainer within sandbox \"2141e8d406eab9e0443a1d91b163f27fad2970525b58b0f5b3d861ad0e6f201c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"827d8688f89ae54892d568b10a4e2ad94ddbe5239e1c434e5f4d98f354b050eb\"" Feb 9 10:09:31.575364 env[1137]: time="2024-02-09T10:09:31.575331869Z" level=info msg="StartContainer for 
\"827d8688f89ae54892d568b10a4e2ad94ddbe5239e1c434e5f4d98f354b050eb\"" Feb 9 10:09:31.590479 systemd[1]: Started cri-containerd-827d8688f89ae54892d568b10a4e2ad94ddbe5239e1c434e5f4d98f354b050eb.scope. Feb 9 10:09:31.624765 env[1137]: time="2024-02-09T10:09:31.624727753Z" level=info msg="StartContainer for \"827d8688f89ae54892d568b10a4e2ad94ddbe5239e1c434e5f4d98f354b050eb\" returns successfully" Feb 9 10:09:31.730119 kubelet[1407]: E0209 10:09:31.730071 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:31.825323 kubelet[1407]: E0209 10:09:31.825230 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:31.827277 env[1137]: time="2024-02-09T10:09:31.827240917Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:09:31.827387 kubelet[1407]: E0209 10:09:31.827260 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:31.840009 env[1137]: time="2024-02-09T10:09:31.839949639Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\"" Feb 9 10:09:31.840543 env[1137]: time="2024-02-09T10:09:31.840511666Z" level=info msg="StartContainer for \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\"" Feb 9 10:09:31.856454 systemd[1]: Started cri-containerd-cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c.scope. 
Feb 9 10:09:31.891315 env[1137]: time="2024-02-09T10:09:31.891272967Z" level=info msg="StartContainer for \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\" returns successfully" Feb 9 10:09:31.901168 systemd[1]: cri-containerd-cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c.scope: Deactivated successfully. Feb 9 10:09:32.014663 env[1137]: time="2024-02-09T10:09:32.014621722Z" level=info msg="shim disconnected" id=cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c Feb 9 10:09:32.015005 env[1137]: time="2024-02-09T10:09:32.014983631Z" level=warning msg="cleaning up after shim disconnected" id=cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c namespace=k8s.io Feb 9 10:09:32.015103 env[1137]: time="2024-02-09T10:09:32.015088026Z" level=info msg="cleaning up dead shim" Feb 9 10:09:32.022621 env[1137]: time="2024-02-09T10:09:32.022577651Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1872 runtime=io.containerd.runc.v2\n" Feb 9 10:09:32.730561 kubelet[1407]: E0209 10:09:32.730501 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:32.830613 kubelet[1407]: E0209 10:09:32.830375 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:32.830613 kubelet[1407]: E0209 10:09:32.830428 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:32.832230 env[1137]: time="2024-02-09T10:09:32.832177948Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 
10:09:32.843738 kubelet[1407]: I0209 10:09:32.843703 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bxhv9" podStartSLOduration=2.974114125 podCreationTimestamp="2024-02-09 10:09:23 +0000 UTC" firstStartedPulling="2024-02-09 10:09:24.691845777 +0000 UTC m=+3.713993370" lastFinishedPulling="2024-02-09 10:09:31.561397278 +0000 UTC m=+10.583544831" observedRunningTime="2024-02-09 10:09:31.847900621 +0000 UTC m=+10.870048214" watchObservedRunningTime="2024-02-09 10:09:32.843665586 +0000 UTC m=+11.865813178" Feb 9 10:09:32.845000 env[1137]: time="2024-02-09T10:09:32.844949680Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\"" Feb 9 10:09:32.845694 env[1137]: time="2024-02-09T10:09:32.845666190Z" level=info msg="StartContainer for \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\"" Feb 9 10:09:32.861027 systemd[1]: Started cri-containerd-2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68.scope. Feb 9 10:09:32.897690 systemd[1]: cri-containerd-2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68.scope: Deactivated successfully. 
Feb 9 10:09:32.898715 env[1137]: time="2024-02-09T10:09:32.898655676Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f05c51_6c3c_42d8_a4f2_4f9049e18ac4.slice/cri-containerd-2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68.scope/memory.events\": no such file or directory" Feb 9 10:09:32.900418 env[1137]: time="2024-02-09T10:09:32.900377959Z" level=info msg="StartContainer for \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\" returns successfully" Feb 9 10:09:32.917243 env[1137]: time="2024-02-09T10:09:32.917179055Z" level=info msg="shim disconnected" id=2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68 Feb 9 10:09:32.917243 env[1137]: time="2024-02-09T10:09:32.917237642Z" level=warning msg="cleaning up after shim disconnected" id=2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68 namespace=k8s.io Feb 9 10:09:32.917243 env[1137]: time="2024-02-09T10:09:32.917248106Z" level=info msg="cleaning up dead shim" Feb 9 10:09:32.923528 env[1137]: time="2024-02-09T10:09:32.923496009Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:09:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1929 runtime=io.containerd.runc.v2\n" Feb 9 10:09:33.042280 systemd[1]: run-containerd-runc-k8s.io-2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68-runc.pwAjH9.mount: Deactivated successfully. Feb 9 10:09:33.042379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68-rootfs.mount: Deactivated successfully. 
Feb 9 10:09:33.731391 kubelet[1407]: E0209 10:09:33.731348 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:33.833812 kubelet[1407]: E0209 10:09:33.833743 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:33.835912 env[1137]: time="2024-02-09T10:09:33.835875012Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:09:33.845796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196934003.mount: Deactivated successfully. Feb 9 10:09:33.849225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257798439.mount: Deactivated successfully. Feb 9 10:09:33.853373 env[1137]: time="2024-02-09T10:09:33.853328085Z" level=info msg="CreateContainer within sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\"" Feb 9 10:09:33.854007 env[1137]: time="2024-02-09T10:09:33.853873972Z" level=info msg="StartContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\"" Feb 9 10:09:33.867160 systemd[1]: Started cri-containerd-154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32.scope. 
Feb 9 10:09:33.917702 env[1137]: time="2024-02-09T10:09:33.917639051Z" level=info msg="StartContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" returns successfully" Feb 9 10:09:34.044566 kubelet[1407]: I0209 10:09:34.044342 1407 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 10:09:34.166219 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:09:34.382219 kernel: Initializing XFRM netlink socket Feb 9 10:09:34.384215 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:09:34.731780 kubelet[1407]: E0209 10:09:34.731644 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:34.831512 kubelet[1407]: I0209 10:09:34.831478 1407 topology_manager.go:215] "Topology Admit Handler" podUID="a4e247bb-825f-49ba-8da2-51cbe81c0de4" podNamespace="default" podName="nginx-deployment-6d5f899847-xxvpw" Feb 9 10:09:34.836095 systemd[1]: Created slice kubepods-besteffort-poda4e247bb_825f_49ba_8da2_51cbe81c0de4.slice. 
Feb 9 10:09:34.839102 kubelet[1407]: E0209 10:09:34.839066 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:34.851512 kubelet[1407]: I0209 10:09:34.851473 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-47sl2" podStartSLOduration=6.504530796 podCreationTimestamp="2024-02-09 10:09:23 +0000 UTC" firstStartedPulling="2024-02-09 10:09:24.686283313 +0000 UTC m=+3.708430905" lastFinishedPulling="2024-02-09 10:09:30.033175998 +0000 UTC m=+9.055323911" observedRunningTime="2024-02-09 10:09:34.851019171 +0000 UTC m=+13.873166764" watchObservedRunningTime="2024-02-09 10:09:34.851423802 +0000 UTC m=+13.873571395" Feb 9 10:09:34.891969 kubelet[1407]: I0209 10:09:34.891923 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqb4x\" (UniqueName: \"kubernetes.io/projected/a4e247bb-825f-49ba-8da2-51cbe81c0de4-kube-api-access-qqb4x\") pod \"nginx-deployment-6d5f899847-xxvpw\" (UID: \"a4e247bb-825f-49ba-8da2-51cbe81c0de4\") " pod="default/nginx-deployment-6d5f899847-xxvpw" Feb 9 10:09:35.138745 env[1137]: time="2024-02-09T10:09:35.138700374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xxvpw,Uid:a4e247bb-825f-49ba-8da2-51cbe81c0de4,Namespace:default,Attempt:0,}" Feb 9 10:09:35.588345 systemd-networkd[1059]: cilium_host: Link UP Feb 9 10:09:35.589215 systemd-networkd[1059]: cilium_net: Link UP Feb 9 10:09:35.590218 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 10:09:35.590272 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 10:09:35.590911 systemd-networkd[1059]: cilium_net: Gained carrier Feb 9 10:09:35.591100 systemd-networkd[1059]: cilium_host: Gained carrier Feb 9 10:09:35.665665 systemd-networkd[1059]: cilium_vxlan: Link UP Feb 9 
10:09:35.665671 systemd-networkd[1059]: cilium_vxlan: Gained carrier Feb 9 10:09:35.716305 systemd-networkd[1059]: cilium_host: Gained IPv6LL Feb 9 10:09:35.732822 kubelet[1407]: E0209 10:09:35.732774 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:35.840084 kubelet[1407]: E0209 10:09:35.839987 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:35.942216 kernel: NET: Registered PF_ALG protocol family Feb 9 10:09:35.988304 systemd-networkd[1059]: cilium_net: Gained IPv6LL Feb 9 10:09:36.493549 systemd-networkd[1059]: lxc_health: Link UP Feb 9 10:09:36.502625 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:09:36.502172 systemd-networkd[1059]: lxc_health: Gained carrier Feb 9 10:09:36.674037 systemd-networkd[1059]: lxc7e96b2f63036: Link UP Feb 9 10:09:36.686288 kernel: eth0: renamed from tmp5c90b Feb 9 10:09:36.697264 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:09:36.697349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e96b2f63036: link becomes ready Feb 9 10:09:36.697374 systemd-networkd[1059]: lxc7e96b2f63036: Gained carrier Feb 9 10:09:36.733116 kubelet[1407]: E0209 10:09:36.733058 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:36.841218 kubelet[1407]: E0209 10:09:36.841127 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:37.213628 systemd-networkd[1059]: cilium_vxlan: Gained IPv6LL Feb 9 10:09:37.733965 kubelet[1407]: E0209 10:09:37.733925 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
10:09:37.852379 systemd-networkd[1059]: lxc7e96b2f63036: Gained IPv6LL Feb 9 10:09:37.916305 systemd-networkd[1059]: lxc_health: Gained IPv6LL Feb 9 10:09:38.067771 kubelet[1407]: E0209 10:09:38.067740 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:38.735037 kubelet[1407]: E0209 10:09:38.734996 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:39.735895 kubelet[1407]: E0209 10:09:39.735859 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:40.194363 env[1137]: time="2024-02-09T10:09:40.194303356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:40.194695 env[1137]: time="2024-02-09T10:09:40.194671317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:40.194773 env[1137]: time="2024-02-09T10:09:40.194753952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:40.195028 env[1137]: time="2024-02-09T10:09:40.194993263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb pid=2467 runtime=io.containerd.runc.v2 Feb 9 10:09:40.206263 systemd[1]: run-containerd-runc-k8s.io-5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb-runc.owyaLA.mount: Deactivated successfully. Feb 9 10:09:40.208502 systemd[1]: Started cri-containerd-5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb.scope. 
Feb 9 10:09:40.260376 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:09:40.275692 env[1137]: time="2024-02-09T10:09:40.275645030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xxvpw,Uid:a4e247bb-825f-49ba-8da2-51cbe81c0de4,Namespace:default,Attempt:0,} returns sandbox id \"5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb\"" Feb 9 10:09:40.276973 env[1137]: time="2024-02-09T10:09:40.276947964Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 10:09:40.736879 kubelet[1407]: E0209 10:09:40.736834 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:41.723686 kubelet[1407]: E0209 10:09:41.723642 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:41.737880 kubelet[1407]: E0209 10:09:41.737855 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:42.738503 kubelet[1407]: E0209 10:09:42.738454 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:43.012197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520811162.mount: Deactivated successfully. 
Feb 9 10:09:43.729429 env[1137]: time="2024-02-09T10:09:43.729376906Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:43.732517 env[1137]: time="2024-02-09T10:09:43.732476542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:43.734374 env[1137]: time="2024-02-09T10:09:43.734342625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:43.736257 env[1137]: time="2024-02-09T10:09:43.736220464Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:43.737050 env[1137]: time="2024-02-09T10:09:43.737014656Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 10:09:43.738664 kubelet[1407]: E0209 10:09:43.738629 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:43.738958 env[1137]: time="2024-02-09T10:09:43.738858627Z" level=info msg="CreateContainer within sandbox \"5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 10:09:43.749754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754487985.mount: Deactivated successfully. 
Feb 9 10:09:43.752956 env[1137]: time="2024-02-09T10:09:43.752909891Z" level=info msg="CreateContainer within sandbox \"5c90b0f546019ca3c6954b0d6d1b011bb470ceb5b08a8661c34df23bd81763fb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3cda94fa9eb4a063a037462cf7cc7b4ad725e4a9292182f35dede5fb891185c7\"" Feb 9 10:09:43.753387 env[1137]: time="2024-02-09T10:09:43.753347932Z" level=info msg="StartContainer for \"3cda94fa9eb4a063a037462cf7cc7b4ad725e4a9292182f35dede5fb891185c7\"" Feb 9 10:09:43.769179 systemd[1]: Started cri-containerd-3cda94fa9eb4a063a037462cf7cc7b4ad725e4a9292182f35dede5fb891185c7.scope. Feb 9 10:09:43.803079 env[1137]: time="2024-02-09T10:09:43.803027553Z" level=info msg="StartContainer for \"3cda94fa9eb4a063a037462cf7cc7b4ad725e4a9292182f35dede5fb891185c7\" returns successfully" Feb 9 10:09:44.739026 kubelet[1407]: E0209 10:09:44.738986 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:45.739306 kubelet[1407]: E0209 10:09:45.739259 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:46.580942 kubelet[1407]: I0209 10:09:46.580900 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-xxvpw" podStartSLOduration=9.119855848 podCreationTimestamp="2024-02-09 10:09:34 +0000 UTC" firstStartedPulling="2024-02-09 10:09:40.276669235 +0000 UTC m=+19.298816788" lastFinishedPulling="2024-02-09 10:09:43.737666979 +0000 UTC m=+22.759814572" observedRunningTime="2024-02-09 10:09:43.863310369 +0000 UTC m=+22.885457962" watchObservedRunningTime="2024-02-09 10:09:46.580853632 +0000 UTC m=+25.603001225" Feb 9 10:09:46.581124 kubelet[1407]: I0209 10:09:46.581007 1407 topology_manager.go:215] "Topology Admit Handler" podUID="d51df8ae-32ae-4e41-8e89-74d63491d405" podNamespace="default" podName="nfs-server-provisioner-0" Feb 9 
10:09:46.585277 systemd[1]: Created slice kubepods-besteffort-podd51df8ae_32ae_4e41_8e89_74d63491d405.slice. Feb 9 10:09:46.652020 kubelet[1407]: I0209 10:09:46.651984 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d51df8ae-32ae-4e41-8e89-74d63491d405-data\") pod \"nfs-server-provisioner-0\" (UID: \"d51df8ae-32ae-4e41-8e89-74d63491d405\") " pod="default/nfs-server-provisioner-0" Feb 9 10:09:46.652164 kubelet[1407]: I0209 10:09:46.652032 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8npc\" (UniqueName: \"kubernetes.io/projected/d51df8ae-32ae-4e41-8e89-74d63491d405-kube-api-access-h8npc\") pod \"nfs-server-provisioner-0\" (UID: \"d51df8ae-32ae-4e41-8e89-74d63491d405\") " pod="default/nfs-server-provisioner-0" Feb 9 10:09:46.740375 kubelet[1407]: E0209 10:09:46.740335 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:46.887535 env[1137]: time="2024-02-09T10:09:46.887412314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d51df8ae-32ae-4e41-8e89-74d63491d405,Namespace:default,Attempt:0,}" Feb 9 10:09:46.909760 systemd-networkd[1059]: lxcc3a5649a3acd: Link UP Feb 9 10:09:46.922234 kernel: eth0: renamed from tmp328ae Feb 9 10:09:46.929113 systemd-networkd[1059]: lxcc3a5649a3acd: Gained carrier Feb 9 10:09:46.929276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:09:46.929310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc3a5649a3acd: link becomes ready Feb 9 10:09:47.154920 env[1137]: time="2024-02-09T10:09:47.154667317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:09:47.154920 env[1137]: time="2024-02-09T10:09:47.154706996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:09:47.154920 env[1137]: time="2024-02-09T10:09:47.154717075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:09:47.155202 env[1137]: time="2024-02-09T10:09:47.155144138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba pid=2602 runtime=io.containerd.runc.v2 Feb 9 10:09:47.169366 systemd[1]: Started cri-containerd-328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba.scope. Feb 9 10:09:47.191470 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:09:47.207169 env[1137]: time="2024-02-09T10:09:47.207116896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d51df8ae-32ae-4e41-8e89-74d63491d405,Namespace:default,Attempt:0,} returns sandbox id \"328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba\"" Feb 9 10:09:47.208585 env[1137]: time="2024-02-09T10:09:47.208534199Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 10:09:47.740888 kubelet[1407]: E0209 10:09:47.740843 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:47.764925 systemd[1]: run-containerd-runc-k8s.io-328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba-runc.ljVkIi.mount: Deactivated successfully. 
Feb 9 10:09:48.540352 systemd-networkd[1059]: lxcc3a5649a3acd: Gained IPv6LL Feb 9 10:09:48.741762 kubelet[1407]: E0209 10:09:48.741724 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:49.037591 kubelet[1407]: I0209 10:09:49.037487 1407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 10:09:49.038719 kubelet[1407]: E0209 10:09:49.038541 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:49.400755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517478935.mount: Deactivated successfully. Feb 9 10:09:49.742944 kubelet[1407]: E0209 10:09:49.742649 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:49.864309 kubelet[1407]: E0209 10:09:49.864282 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:09:50.743202 kubelet[1407]: E0209 10:09:50.743155 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:51.123592 env[1137]: time="2024-02-09T10:09:51.123541216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:51.125097 env[1137]: time="2024-02-09T10:09:51.125065127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:51.127255 env[1137]: time="2024-02-09T10:09:51.127230618Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:51.129317 env[1137]: time="2024-02-09T10:09:51.129291272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:09:51.129892 env[1137]: time="2024-02-09T10:09:51.129867894Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 9 10:09:51.132466 env[1137]: time="2024-02-09T10:09:51.132426892Z" level=info msg="CreateContainer within sandbox \"328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 10:09:51.141949 env[1137]: time="2024-02-09T10:09:51.141911509Z" level=info msg="CreateContainer within sandbox \"328ae77f9909dd3c4ebd94413427def7b66a6d10c1489b3b807a280eee16bfba\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b4a1d342346abd7f9bf690386b928f7cbbdd1704ede6205052ef6c6b4142e9bd\"" Feb 9 10:09:51.142470 env[1137]: time="2024-02-09T10:09:51.142440772Z" level=info msg="StartContainer for \"b4a1d342346abd7f9bf690386b928f7cbbdd1704ede6205052ef6c6b4142e9bd\"" Feb 9 10:09:51.158924 systemd[1]: run-containerd-runc-k8s.io-b4a1d342346abd7f9bf690386b928f7cbbdd1704ede6205052ef6c6b4142e9bd-runc.TXZoXq.mount: Deactivated successfully. Feb 9 10:09:51.160956 systemd[1]: Started cri-containerd-b4a1d342346abd7f9bf690386b928f7cbbdd1704ede6205052ef6c6b4142e9bd.scope. 
Feb 9 10:09:51.209725 env[1137]: time="2024-02-09T10:09:51.209681623Z" level=info msg="StartContainer for \"b4a1d342346abd7f9bf690386b928f7cbbdd1704ede6205052ef6c6b4142e9bd\" returns successfully" Feb 9 10:09:51.743979 kubelet[1407]: E0209 10:09:51.743945 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:51.877360 kubelet[1407]: I0209 10:09:51.877231 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.955382849 podCreationTimestamp="2024-02-09 10:09:46 +0000 UTC" firstStartedPulling="2024-02-09 10:09:47.208298128 +0000 UTC m=+26.230445721" lastFinishedPulling="2024-02-09 10:09:51.130111286 +0000 UTC m=+30.152258879" observedRunningTime="2024-02-09 10:09:51.876564267 +0000 UTC m=+30.898711860" watchObservedRunningTime="2024-02-09 10:09:51.877196007 +0000 UTC m=+30.899343600" Feb 9 10:09:52.744197 kubelet[1407]: E0209 10:09:52.744155 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:53.744754 kubelet[1407]: E0209 10:09:53.744721 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:54.745666 kubelet[1407]: E0209 10:09:54.745625 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:55.745946 kubelet[1407]: E0209 10:09:55.745905 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:56.746583 kubelet[1407]: E0209 10:09:56.746537 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:57.746829 kubelet[1407]: E0209 10:09:57.746789 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 10:09:58.747226 kubelet[1407]: E0209 10:09:58.747180 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:09:59.748368 kubelet[1407]: E0209 10:09:59.748309 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:00.014451 update_engine[1132]: I0209 10:10:00.014045 1132 update_attempter.cc:509] Updating boot flags... Feb 9 10:10:00.748944 kubelet[1407]: E0209 10:10:00.748905 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:01.060048 kubelet[1407]: I0209 10:10:01.059696 1407 topology_manager.go:215] "Topology Admit Handler" podUID="31aac6cf-94cd-413e-9fbb-2f48a8566e2e" podNamespace="default" podName="test-pod-1" Feb 9 10:10:01.063920 systemd[1]: Created slice kubepods-besteffort-pod31aac6cf_94cd_413e_9fbb_2f48a8566e2e.slice. Feb 9 10:10:01.121112 kubelet[1407]: I0209 10:10:01.121085 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c511d9d-456e-4619-b471-565d9136aa63\" (UniqueName: \"kubernetes.io/nfs/31aac6cf-94cd-413e-9fbb-2f48a8566e2e-pvc-7c511d9d-456e-4619-b471-565d9136aa63\") pod \"test-pod-1\" (UID: \"31aac6cf-94cd-413e-9fbb-2f48a8566e2e\") " pod="default/test-pod-1" Feb 9 10:10:01.121253 kubelet[1407]: I0209 10:10:01.121134 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdz5d\" (UniqueName: \"kubernetes.io/projected/31aac6cf-94cd-413e-9fbb-2f48a8566e2e-kube-api-access-vdz5d\") pod \"test-pod-1\" (UID: \"31aac6cf-94cd-413e-9fbb-2f48a8566e2e\") " pod="default/test-pod-1" Feb 9 10:10:01.239231 kernel: FS-Cache: Loaded Feb 9 10:10:01.264276 kernel: RPC: Registered named UNIX socket transport module. Feb 9 10:10:01.264355 kernel: RPC: Registered udp transport module. 
Feb 9 10:10:01.264387 kernel: RPC: Registered tcp transport module. Feb 9 10:10:01.264405 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 10:10:01.296209 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 10:10:01.427224 kernel: NFS: Registering the id_resolver key type Feb 9 10:10:01.427428 kernel: Key type id_resolver registered Feb 9 10:10:01.427457 kernel: Key type id_legacy registered Feb 9 10:10:01.446056 nfsidmap[2729]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 10:10:01.449114 nfsidmap[2732]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 10:10:01.666329 env[1137]: time="2024-02-09T10:10:01.666262797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:31aac6cf-94cd-413e-9fbb-2f48a8566e2e,Namespace:default,Attempt:0,}" Feb 9 10:10:01.687788 systemd-networkd[1059]: lxc0dc5ba4946dc: Link UP Feb 9 10:10:01.696233 kernel: eth0: renamed from tmp15ca8 Feb 9 10:10:01.702703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:10:01.702770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0dc5ba4946dc: link becomes ready Feb 9 10:10:01.702796 systemd-networkd[1059]: lxc0dc5ba4946dc: Gained carrier Feb 9 10:10:01.724035 kubelet[1407]: E0209 10:10:01.723997 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:01.749443 kubelet[1407]: E0209 10:10:01.749410 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:01.964308 env[1137]: time="2024-02-09T10:10:01.958450561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:10:01.964308 env[1137]: time="2024-02-09T10:10:01.958508439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:10:01.964308 env[1137]: time="2024-02-09T10:10:01.958519759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:10:01.964308 env[1137]: time="2024-02-09T10:10:01.958689676Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15ca8f5d54741c2357de8af5cca8117a975d46343c3ff3c88a145237f186a51f pid=2766 runtime=io.containerd.runc.v2 Feb 9 10:10:01.972016 systemd[1]: Started cri-containerd-15ca8f5d54741c2357de8af5cca8117a975d46343c3ff3c88a145237f186a51f.scope. Feb 9 10:10:01.989337 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:10:02.006329 env[1137]: time="2024-02-09T10:10:02.006292903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:31aac6cf-94cd-413e-9fbb-2f48a8566e2e,Namespace:default,Attempt:0,} returns sandbox id \"15ca8f5d54741c2357de8af5cca8117a975d46343c3ff3c88a145237f186a51f\"" Feb 9 10:10:02.007868 env[1137]: time="2024-02-09T10:10:02.007842555Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 10:10:02.364440 env[1137]: time="2024-02-09T10:10:02.364397825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:02.365966 env[1137]: time="2024-02-09T10:10:02.365932557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:02.370499 env[1137]: 
time="2024-02-09T10:10:02.370464515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:02.371270 env[1137]: time="2024-02-09T10:10:02.371238422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:02.371919 env[1137]: time="2024-02-09T10:10:02.371888810Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 10:10:02.373818 env[1137]: time="2024-02-09T10:10:02.373789016Z" level=info msg="CreateContainer within sandbox \"15ca8f5d54741c2357de8af5cca8117a975d46343c3ff3c88a145237f186a51f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 10:10:02.388385 env[1137]: time="2024-02-09T10:10:02.388339754Z" level=info msg="CreateContainer within sandbox \"15ca8f5d54741c2357de8af5cca8117a975d46343c3ff3c88a145237f186a51f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3908f59ba0f069c5a81b93c8f8b0973c96cc12c42d88823855a99b04beb4a3aa\"" Feb 9 10:10:02.388877 env[1137]: time="2024-02-09T10:10:02.388811386Z" level=info msg="StartContainer for \"3908f59ba0f069c5a81b93c8f8b0973c96cc12c42d88823855a99b04beb4a3aa\"" Feb 9 10:10:02.407933 systemd[1]: Started cri-containerd-3908f59ba0f069c5a81b93c8f8b0973c96cc12c42d88823855a99b04beb4a3aa.scope. 
Feb 9 10:10:02.439251 env[1137]: time="2024-02-09T10:10:02.439212359Z" level=info msg="StartContainer for \"3908f59ba0f069c5a81b93c8f8b0973c96cc12c42d88823855a99b04beb4a3aa\" returns successfully" Feb 9 10:10:02.750535 kubelet[1407]: E0209 10:10:02.750401 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:02.891753 kubelet[1407]: I0209 10:10:02.891704 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.5270619 podCreationTimestamp="2024-02-09 10:09:46 +0000 UTC" firstStartedPulling="2024-02-09 10:10:02.007486161 +0000 UTC m=+41.029633714" lastFinishedPulling="2024-02-09 10:10:02.372093686 +0000 UTC m=+41.394241239" observedRunningTime="2024-02-09 10:10:02.891494828 +0000 UTC m=+41.913642421" watchObservedRunningTime="2024-02-09 10:10:02.891669425 +0000 UTC m=+41.913816978" Feb 9 10:10:03.196375 systemd-networkd[1059]: lxc0dc5ba4946dc: Gained IPv6LL Feb 9 10:10:03.233908 systemd[1]: run-containerd-runc-k8s.io-3908f59ba0f069c5a81b93c8f8b0973c96cc12c42d88823855a99b04beb4a3aa-runc.snyWPg.mount: Deactivated successfully. 
Feb 9 10:10:03.750908 kubelet[1407]: E0209 10:10:03.750866 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:04.751560 kubelet[1407]: E0209 10:10:04.751523 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:05.752539 kubelet[1407]: E0209 10:10:05.752497 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:06.756790 kubelet[1407]: E0209 10:10:06.756739 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:07.757256 kubelet[1407]: E0209 10:10:07.757204 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:08.757773 kubelet[1407]: E0209 10:10:08.757729 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:09.298843 env[1137]: time="2024-02-09T10:10:09.298663211Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:10:09.310966 env[1137]: time="2024-02-09T10:10:09.310908411Z" level=info msg="StopContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" with timeout 2 (s)" Feb 9 10:10:09.315097 env[1137]: time="2024-02-09T10:10:09.311330406Z" level=info msg="Stop container \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" with signal terminated" Feb 9 10:10:09.320557 systemd-networkd[1059]: lxc_health: Link DOWN Feb 9 10:10:09.320561 systemd-networkd[1059]: lxc_health: Lost carrier Feb 9 10:10:09.356560 systemd[1]: 
cri-containerd-154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32.scope: Deactivated successfully. Feb 9 10:10:09.356948 systemd[1]: cri-containerd-154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32.scope: Consumed 6.472s CPU time. Feb 9 10:10:09.379056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32-rootfs.mount: Deactivated successfully. Feb 9 10:10:09.389993 env[1137]: time="2024-02-09T10:10:09.389739061Z" level=info msg="shim disconnected" id=154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32 Feb 9 10:10:09.389993 env[1137]: time="2024-02-09T10:10:09.389787381Z" level=warning msg="cleaning up after shim disconnected" id=154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32 namespace=k8s.io Feb 9 10:10:09.389993 env[1137]: time="2024-02-09T10:10:09.389798301Z" level=info msg="cleaning up dead shim" Feb 9 10:10:09.396658 env[1137]: time="2024-02-09T10:10:09.396596172Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2897 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:10:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 10:10:09.398838 env[1137]: time="2024-02-09T10:10:09.398795023Z" level=info msg="StopContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" returns successfully" Feb 9 10:10:09.399469 env[1137]: time="2024-02-09T10:10:09.399439855Z" level=info msg="StopPodSandbox for \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\"" Feb 9 10:10:09.399535 env[1137]: time="2024-02-09T10:10:09.399502374Z" level=info msg="Container to stop \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 
10:10:09.399535 env[1137]: time="2024-02-09T10:10:09.399517734Z" level=info msg="Container to stop \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:10:09.399535 env[1137]: time="2024-02-09T10:10:09.399529614Z" level=info msg="Container to stop \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:10:09.399612 env[1137]: time="2024-02-09T10:10:09.399542133Z" level=info msg="Container to stop \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:10:09.399612 env[1137]: time="2024-02-09T10:10:09.399553373Z" level=info msg="Container to stop \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:10:09.400871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba-shm.mount: Deactivated successfully. Feb 9 10:10:09.406313 systemd[1]: cri-containerd-2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba.scope: Deactivated successfully. Feb 9 10:10:09.426160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba-rootfs.mount: Deactivated successfully. 
Feb 9 10:10:09.429780 env[1137]: time="2024-02-09T10:10:09.429736059Z" level=info msg="shim disconnected" id=2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba Feb 9 10:10:09.430002 env[1137]: time="2024-02-09T10:10:09.429982376Z" level=warning msg="cleaning up after shim disconnected" id=2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba namespace=k8s.io Feb 9 10:10:09.430075 env[1137]: time="2024-02-09T10:10:09.430061855Z" level=info msg="cleaning up dead shim" Feb 9 10:10:09.437639 env[1137]: time="2024-02-09T10:10:09.437599676Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" Feb 9 10:10:09.438058 env[1137]: time="2024-02-09T10:10:09.438028791Z" level=info msg="TearDown network for sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" successfully" Feb 9 10:10:09.438214 env[1137]: time="2024-02-09T10:10:09.438168309Z" level=info msg="StopPodSandbox for \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" returns successfully" Feb 9 10:10:09.467501 kubelet[1407]: I0209 10:10:09.467462 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-config-path\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467501 kubelet[1407]: I0209 10:10:09.467506 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-run\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467523 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cni-path\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467543 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-xtables-lock\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467562 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hubble-tls\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467580 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbbgv\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-kube-api-access-rbbgv\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467597 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-bpf-maps\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467695 kubelet[1407]: I0209 10:10:09.467628 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-kernel\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467644 1407 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hostproc\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467662 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-net\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467681 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-clustermesh-secrets\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467700 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-etc-cni-netd\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467717 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-cgroup\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.467830 kubelet[1407]: I0209 10:10:09.467736 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-lib-modules\") pod \"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\" (UID: 
\"d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4\") " Feb 9 10:10:09.468028 kubelet[1407]: I0209 10:10:09.467774 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.468028 kubelet[1407]: I0209 10:10:09.467806 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.468028 kubelet[1407]: I0209 10:10:09.467820 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.468028 kubelet[1407]: I0209 10:10:09.467834 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.468120 kubelet[1407]: I0209 10:10:09.468066 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.469461 kubelet[1407]: I0209 10:10:09.469423 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:10:09.469553 kubelet[1407]: I0209 10:10:09.469480 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.469553 kubelet[1407]: I0209 10:10:09.469498 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.469553 kubelet[1407]: I0209 10:10:09.469515 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.469553 kubelet[1407]: I0209 10:10:09.469531 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.469553 kubelet[1407]: I0209 10:10:09.469546 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:09.470673 kubelet[1407]: I0209 10:10:09.470607 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-kube-api-access-rbbgv" (OuterVolumeSpecName: "kube-api-access-rbbgv") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "kube-api-access-rbbgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:10:09.470673 kubelet[1407]: I0209 10:10:09.470626 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:10:09.471280 systemd[1]: var-lib-kubelet-pods-d3f05c51\x2d6c3c\x2d42d8\x2da4f2\x2d4f9049e18ac4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbbgv.mount: Deactivated successfully. Feb 9 10:10:09.471378 systemd[1]: var-lib-kubelet-pods-d3f05c51\x2d6c3c\x2d42d8\x2da4f2\x2d4f9049e18ac4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 10:10:09.471449 kubelet[1407]: I0209 10:10:09.471383 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" (UID: "d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568734 1407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-xtables-lock\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568772 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-run\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568785 1407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cni-path\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568798 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-net\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568808 1407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-clustermesh-secrets\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568820 1407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hubble-tls\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 10:10:09.568831 1407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rbbgv\" (UniqueName: \"kubernetes.io/projected/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-kube-api-access-rbbgv\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.568882 kubelet[1407]: I0209 
10:10:09.568840 1407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-bpf-maps\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568850 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-host-proc-sys-kernel\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568859 1407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-hostproc\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568868 1407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-etc-cni-netd\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568877 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-cgroup\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568886 1407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-lib-modules\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.569174 kubelet[1407]: I0209 10:10:09.568895 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4-cilium-config-path\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:09.758821 kubelet[1407]: E0209 10:10:09.758746 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 10:10:09.812288 systemd[1]: Removed slice kubepods-burstable-podd3f05c51_6c3c_42d8_a4f2_4f9049e18ac4.slice. Feb 9 10:10:09.812379 systemd[1]: kubepods-burstable-podd3f05c51_6c3c_42d8_a4f2_4f9049e18ac4.slice: Consumed 6.660s CPU time. Feb 9 10:10:09.897855 kubelet[1407]: I0209 10:10:09.897737 1407 scope.go:117] "RemoveContainer" containerID="154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32" Feb 9 10:10:09.900112 env[1137]: time="2024-02-09T10:10:09.900075436Z" level=info msg="RemoveContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\"" Feb 9 10:10:09.905063 env[1137]: time="2024-02-09T10:10:09.905020971Z" level=info msg="RemoveContainer for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" returns successfully" Feb 9 10:10:09.905348 kubelet[1407]: I0209 10:10:09.905326 1407 scope.go:117] "RemoveContainer" containerID="2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68" Feb 9 10:10:09.906481 env[1137]: time="2024-02-09T10:10:09.906451712Z" level=info msg="RemoveContainer for \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\"" Feb 9 10:10:09.908922 env[1137]: time="2024-02-09T10:10:09.908866521Z" level=info msg="RemoveContainer for \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\" returns successfully" Feb 9 10:10:09.909091 kubelet[1407]: I0209 10:10:09.909047 1407 scope.go:117] "RemoveContainer" containerID="cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c" Feb 9 10:10:09.910093 env[1137]: time="2024-02-09T10:10:09.910064105Z" level=info msg="RemoveContainer for \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\"" Feb 9 10:10:09.912743 env[1137]: time="2024-02-09T10:10:09.912695951Z" level=info msg="RemoveContainer for \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\" returns successfully" Feb 9 10:10:09.912907 kubelet[1407]: I0209 10:10:09.912868 1407 
scope.go:117] "RemoveContainer" containerID="d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7" Feb 9 10:10:09.913882 env[1137]: time="2024-02-09T10:10:09.913856896Z" level=info msg="RemoveContainer for \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\"" Feb 9 10:10:09.915718 env[1137]: time="2024-02-09T10:10:09.915690632Z" level=info msg="RemoveContainer for \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\" returns successfully" Feb 9 10:10:09.915891 kubelet[1407]: I0209 10:10:09.915871 1407 scope.go:117] "RemoveContainer" containerID="b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37" Feb 9 10:10:09.916930 env[1137]: time="2024-02-09T10:10:09.916897656Z" level=info msg="RemoveContainer for \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\"" Feb 9 10:10:09.919030 env[1137]: time="2024-02-09T10:10:09.918994549Z" level=info msg="RemoveContainer for \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\" returns successfully" Feb 9 10:10:09.919226 kubelet[1407]: I0209 10:10:09.919175 1407 scope.go:117] "RemoveContainer" containerID="154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32" Feb 9 10:10:09.919489 env[1137]: time="2024-02-09T10:10:09.919406463Z" level=error msg="ContainerStatus for \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\": not found" Feb 9 10:10:09.919707 kubelet[1407]: E0209 10:10:09.919686 1407 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\": not found" containerID="154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32" Feb 9 10:10:09.919794 kubelet[1407]: I0209 10:10:09.919780 
1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32"} err="failed to get container status \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\": rpc error: code = NotFound desc = an error occurred when try to find container \"154545b992cd4e24afe4ec236cc7f174c0ce236f1a591dd36e54ed7b80c72b32\": not found" Feb 9 10:10:09.919826 kubelet[1407]: I0209 10:10:09.919797 1407 scope.go:117] "RemoveContainer" containerID="2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68" Feb 9 10:10:09.920048 env[1137]: time="2024-02-09T10:10:09.920000335Z" level=error msg="ContainerStatus for \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\": not found" Feb 9 10:10:09.920182 kubelet[1407]: E0209 10:10:09.920162 1407 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\": not found" containerID="2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68" Feb 9 10:10:09.920246 kubelet[1407]: I0209 10:10:09.920218 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68"} err="failed to get container status \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f135c05345d91dccc4549032ace129503aa7688f7b2d1d0d2d6604c60fb1b68\": not found" Feb 9 10:10:09.920246 kubelet[1407]: I0209 10:10:09.920232 1407 scope.go:117] "RemoveContainer" 
containerID="cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c" Feb 9 10:10:09.920456 env[1137]: time="2024-02-09T10:10:09.920410530Z" level=error msg="ContainerStatus for \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\": not found" Feb 9 10:10:09.920668 kubelet[1407]: E0209 10:10:09.920648 1407 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\": not found" containerID="cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c" Feb 9 10:10:09.920772 kubelet[1407]: I0209 10:10:09.920752 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c"} err="failed to get container status \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cce5881350d68922d9cf5e660c342ed83e3fd1b596c9c13700b24fbfdad1299c\": not found" Feb 9 10:10:09.920847 kubelet[1407]: I0209 10:10:09.920836 1407 scope.go:117] "RemoveContainer" containerID="d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7" Feb 9 10:10:09.921173 env[1137]: time="2024-02-09T10:10:09.921115161Z" level=error msg="ContainerStatus for \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\": not found" Feb 9 10:10:09.921299 kubelet[1407]: E0209 10:10:09.921283 1407 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\": not found" containerID="d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7" Feb 9 10:10:09.921338 kubelet[1407]: I0209 10:10:09.921309 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7"} err="failed to get container status \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5658cda2ca80a699224a62f980d38d89da9c4ff1330ae4e877d64ea0eb894f7\": not found" Feb 9 10:10:09.921338 kubelet[1407]: I0209 10:10:09.921320 1407 scope.go:117] "RemoveContainer" containerID="b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37" Feb 9 10:10:09.921559 env[1137]: time="2024-02-09T10:10:09.921512836Z" level=error msg="ContainerStatus for \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\": not found" Feb 9 10:10:09.921780 kubelet[1407]: E0209 10:10:09.921761 1407 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\": not found" containerID="b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37" Feb 9 10:10:09.921834 kubelet[1407]: I0209 10:10:09.921790 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37"} err="failed to get container status \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"b1d61ede576eaf5501f8b8e3066ff2d1bd63450300065c9903b6a782de8ddf37\": not found" Feb 9 10:10:10.260133 systemd[1]: var-lib-kubelet-pods-d3f05c51\x2d6c3c\x2d42d8\x2da4f2\x2d4f9049e18ac4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:10:10.758923 kubelet[1407]: E0209 10:10:10.758873 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:11.759716 kubelet[1407]: E0209 10:10:11.759667 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:11.783564 kubelet[1407]: E0209 10:10:11.783541 1407 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:10:11.809160 kubelet[1407]: I0209 10:10:11.809131 1407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" path="/var/lib/kubelet/pods/d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4/volumes" Feb 9 10:10:12.220992 kubelet[1407]: I0209 10:10:12.220952 1407 topology_manager.go:215] "Topology Admit Handler" podUID="9fd136f8-4dde-4388-a3b8-b322dbe3544a" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-sdxww" Feb 9 10:10:12.220992 kubelet[1407]: E0209 10:10:12.221004 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="clean-cilium-state" Feb 9 10:10:12.221228 kubelet[1407]: E0209 10:10:12.221015 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="cilium-agent" Feb 9 10:10:12.221228 kubelet[1407]: E0209 10:10:12.221022 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="mount-cgroup" Feb 9 
10:10:12.221228 kubelet[1407]: E0209 10:10:12.221028 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="apply-sysctl-overwrites" Feb 9 10:10:12.221228 kubelet[1407]: E0209 10:10:12.221037 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="mount-bpf-fs" Feb 9 10:10:12.221228 kubelet[1407]: I0209 10:10:12.221053 1407 memory_manager.go:346] "RemoveStaleState removing state" podUID="d3f05c51-6c3c-42d8-a4f2-4f9049e18ac4" containerName="cilium-agent" Feb 9 10:10:12.225395 systemd[1]: Created slice kubepods-besteffort-pod9fd136f8_4dde_4388_a3b8_b322dbe3544a.slice. Feb 9 10:10:12.227671 kubelet[1407]: I0209 10:10:12.227645 1407 topology_manager.go:215] "Topology Admit Handler" podUID="02491839-89e5-4fab-ad4e-30f3b18a9ad8" podNamespace="kube-system" podName="cilium-5bq8f" Feb 9 10:10:12.231714 systemd[1]: Created slice kubepods-burstable-pod02491839_89e5_4fab_ad4e_30f3b18a9ad8.slice. 
Feb 9 10:10:12.283654 kubelet[1407]: I0209 10:10:12.283620 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hostproc\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283654 kubelet[1407]: I0209 10:10:12.283663 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cni-path\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283687 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-xtables-lock\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283708 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-ipsec-secrets\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283763 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-run\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283800 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-kernel\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283820 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-etc-cni-netd\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283838 kubelet[1407]: I0209 10:10:12.283838 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-lib-modules\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283982 kubelet[1407]: I0209 10:10:12.283861 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fd136f8-4dde-4388-a3b8-b322dbe3544a-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-sdxww\" (UID: \"9fd136f8-4dde-4388-a3b8-b322dbe3544a\") " pod="kube-system/cilium-operator-6bc8ccdb58-sdxww" Feb 9 10:10:12.283982 kubelet[1407]: I0209 10:10:12.283915 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-cgroup\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283982 kubelet[1407]: I0209 10:10:12.283944 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-clustermesh-secrets\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.283982 kubelet[1407]: I0209 10:10:12.283964 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hubble-tls\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.284070 kubelet[1407]: I0209 10:10:12.284012 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq5pt\" (UniqueName: \"kubernetes.io/projected/9fd136f8-4dde-4388-a3b8-b322dbe3544a-kube-api-access-dq5pt\") pod \"cilium-operator-6bc8ccdb58-sdxww\" (UID: \"9fd136f8-4dde-4388-a3b8-b322dbe3544a\") " pod="kube-system/cilium-operator-6bc8ccdb58-sdxww" Feb 9 10:10:12.284070 kubelet[1407]: I0209 10:10:12.284033 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-bpf-maps\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.284132 kubelet[1407]: I0209 10:10:12.284101 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-net\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.284170 kubelet[1407]: I0209 10:10:12.284157 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkxdn\" (UniqueName: 
\"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-kube-api-access-tkxdn\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.284216 kubelet[1407]: I0209 10:10:12.284204 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-config-path\") pod \"cilium-5bq8f\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " pod="kube-system/cilium-5bq8f" Feb 9 10:10:12.391683 kubelet[1407]: E0209 10:10:12.391640 1407 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls kube-api-access-tkxdn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5bq8f" podUID="02491839-89e5-4fab-ad4e-30f3b18a9ad8" Feb 9 10:10:12.528332 kubelet[1407]: E0209 10:10:12.527134 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:10:12.528983 env[1137]: time="2024-02-09T10:10:12.528936115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-sdxww,Uid:9fd136f8-4dde-4388-a3b8-b322dbe3544a,Namespace:kube-system,Attempt:0,}" Feb 9 10:10:12.540090 env[1137]: time="2024-02-09T10:10:12.540015347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:10:12.540090 env[1137]: time="2024-02-09T10:10:12.540055707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:10:12.540090 env[1137]: time="2024-02-09T10:10:12.540066147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:10:12.541103 env[1137]: time="2024-02-09T10:10:12.540218625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/450350e1703d64142a89e7b7b5a1795665d26936d6c23f146e856f730a7a9929 pid=2958 runtime=io.containerd.runc.v2 Feb 9 10:10:12.550222 systemd[1]: Started cri-containerd-450350e1703d64142a89e7b7b5a1795665d26936d6c23f146e856f730a7a9929.scope. Feb 9 10:10:12.602428 env[1137]: time="2024-02-09T10:10:12.602369867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-sdxww,Uid:9fd136f8-4dde-4388-a3b8-b322dbe3544a,Namespace:kube-system,Attempt:0,} returns sandbox id \"450350e1703d64142a89e7b7b5a1795665d26936d6c23f146e856f730a7a9929\"" Feb 9 10:10:12.603079 kubelet[1407]: E0209 10:10:12.603056 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:10:12.603926 env[1137]: time="2024-02-09T10:10:12.603894850Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:10:12.760272 kubelet[1407]: E0209 10:10:12.760214 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:12.989822 kubelet[1407]: I0209 10:10:12.989784 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-clustermesh-secrets\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989822 kubelet[1407]: I0209 10:10:12.989823 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-cgroup\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989844 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hubble-tls\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989862 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cni-path\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989888 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-config-path\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989908 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkxdn\" (UniqueName: \"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-kube-api-access-tkxdn\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989901 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.989965 kubelet[1407]: I0209 10:10:12.989924 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hostproc\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.989942 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cni-path" (OuterVolumeSpecName: "cni-path") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.989946 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-ipsec-secrets\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.989979 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-kernel\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.990003 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-lib-modules\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.990021 1407 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-net\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990139 kubelet[1407]: I0209 10:10:12.990039 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-bpf-maps\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990303 kubelet[1407]: I0209 10:10:12.990055 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-xtables-lock\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990303 kubelet[1407]: I0209 10:10:12.990072 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-run\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990303 kubelet[1407]: I0209 10:10:12.990099 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-etc-cni-netd\") pod \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\" (UID: \"02491839-89e5-4fab-ad4e-30f3b18a9ad8\") " Feb 9 10:10:12.990303 kubelet[1407]: I0209 10:10:12.990129 1407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cni-path\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:12.990555 kubelet[1407]: I0209 10:10:12.990520 1407 reconciler_common.go:300] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-cgroup\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:12.990594 kubelet[1407]: I0209 10:10:12.990558 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993053 kubelet[1407]: I0209 10:10:12.992997 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:10:12.993152 kubelet[1407]: I0209 10:10:12.993060 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:10:12.993152 kubelet[1407]: I0209 10:10:12.993069 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993152 kubelet[1407]: I0209 10:10:12.993099 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993152 kubelet[1407]: I0209 10:10:12.993119 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993152 kubelet[1407]: I0209 10:10:12.993123 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993290 kubelet[1407]: I0209 10:10:12.993148 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993290 kubelet[1407]: I0209 10:10:12.993149 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.993591 kubelet[1407]: I0209 10:10:12.993561 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:10:12.993658 kubelet[1407]: I0209 10:10:12.993604 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hostproc" (OuterVolumeSpecName: "hostproc") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:10:12.994842 kubelet[1407]: I0209 10:10:12.994795 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:10:12.995726 kubelet[1407]: I0209 10:10:12.995678 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-kube-api-access-tkxdn" (OuterVolumeSpecName: "kube-api-access-tkxdn") pod "02491839-89e5-4fab-ad4e-30f3b18a9ad8" (UID: "02491839-89e5-4fab-ad4e-30f3b18a9ad8"). InnerVolumeSpecName "kube-api-access-tkxdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:10:13.091163 kubelet[1407]: I0209 10:10:13.091123 1407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hubble-tls\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091163 kubelet[1407]: I0209 10:10:13.091162 1407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-clustermesh-secrets\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091175 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-config-path\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091199 1407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-hostproc\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091210 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-ipsec-secrets\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091220 1407 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-kernel\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091230 1407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tkxdn\" (UniqueName: \"kubernetes.io/projected/02491839-89e5-4fab-ad4e-30f3b18a9ad8-kube-api-access-tkxdn\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091239 1407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-lib-modules\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091248 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-host-proc-sys-net\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091279 kubelet[1407]: I0209 10:10:13.091256 1407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-xtables-lock\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091445 kubelet[1407]: I0209 10:10:13.091266 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-cilium-run\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091445 kubelet[1407]: I0209 10:10:13.091274 1407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-etc-cni-netd\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.091445 kubelet[1407]: I0209 10:10:13.091283 1407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/02491839-89e5-4fab-ad4e-30f3b18a9ad8-bpf-maps\") on node \"10.0.0.134\" DevicePath \"\"" Feb 9 10:10:13.390166 systemd[1]: var-lib-kubelet-pods-02491839\x2d89e5\x2d4fab\x2dad4e\x2d30f3b18a9ad8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtkxdn.mount: Deactivated successfully. Feb 9 10:10:13.390265 systemd[1]: var-lib-kubelet-pods-02491839\x2d89e5\x2d4fab\x2dad4e\x2d30f3b18a9ad8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 10:10:13.390316 systemd[1]: var-lib-kubelet-pods-02491839\x2d89e5\x2d4fab\x2dad4e\x2d30f3b18a9ad8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 10:10:13.390364 systemd[1]: var-lib-kubelet-pods-02491839\x2d89e5\x2d4fab\x2dad4e\x2d30f3b18a9ad8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:10:13.761441 kubelet[1407]: E0209 10:10:13.760826 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:10:13.782294 env[1137]: time="2024-02-09T10:10:13.782250951Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:13.784500 env[1137]: time="2024-02-09T10:10:13.784465807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:13.785591 env[1137]: time="2024-02-09T10:10:13.785568594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:10:13.785975 
env[1137]: time="2024-02-09T10:10:13.785950270Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:10:13.788113 env[1137]: time="2024-02-09T10:10:13.788053647Z" level=info msg="CreateContainer within sandbox \"450350e1703d64142a89e7b7b5a1795665d26936d6c23f146e856f730a7a9929\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:10:13.797391 env[1137]: time="2024-02-09T10:10:13.797344784Z" level=info msg="CreateContainer within sandbox \"450350e1703d64142a89e7b7b5a1795665d26936d6c23f146e856f730a7a9929\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3b2eb4ca6a145a07089955cfeb9be8f71f5ebbc1cc17300dc62f1215b3dd7acc\"" Feb 9 10:10:13.797812 env[1137]: time="2024-02-09T10:10:13.797747979Z" level=info msg="StartContainer for \"3b2eb4ca6a145a07089955cfeb9be8f71f5ebbc1cc17300dc62f1215b3dd7acc\"" Feb 9 10:10:13.811944 systemd[1]: Removed slice kubepods-burstable-pod02491839_89e5_4fab_ad4e_30f3b18a9ad8.slice. Feb 9 10:10:13.814049 systemd[1]: Started cri-containerd-3b2eb4ca6a145a07089955cfeb9be8f71f5ebbc1cc17300dc62f1215b3dd7acc.scope. 
Feb 9 10:10:13.850973 env[1137]: time="2024-02-09T10:10:13.850928509Z" level=info msg="StartContainer for \"3b2eb4ca6a145a07089955cfeb9be8f71f5ebbc1cc17300dc62f1215b3dd7acc\" returns successfully" Feb 9 10:10:13.909703 kubelet[1407]: E0209 10:10:13.909577 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:10:13.917535 kubelet[1407]: I0209 10:10:13.917500 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-sdxww" podStartSLOduration=0.734893795 podCreationTimestamp="2024-02-09 10:10:12 +0000 UTC" firstStartedPulling="2024-02-09 10:10:12.603598053 +0000 UTC m=+51.625745606" lastFinishedPulling="2024-02-09 10:10:13.786172468 +0000 UTC m=+52.808320061" observedRunningTime="2024-02-09 10:10:13.916846217 +0000 UTC m=+52.938993810" watchObservedRunningTime="2024-02-09 10:10:13.91746825 +0000 UTC m=+52.939615843" Feb 9 10:10:13.951891 kubelet[1407]: I0209 10:10:13.951846 1407 topology_manager.go:215] "Topology Admit Handler" podUID="e9cb2d13-346b-4766-b82a-90beac2036e9" podNamespace="kube-system" podName="cilium-tdh5w" Feb 9 10:10:13.957001 systemd[1]: Created slice kubepods-burstable-pode9cb2d13_346b_4766_b82a_90beac2036e9.slice. 
Feb 9 10:10:13.958493 kubelet[1407]: W0209 10:10:13.958462 1407 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9cb2d13_346b_4766_b82a_90beac2036e9.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9cb2d13_346b_4766_b82a_90beac2036e9.slice/cpuset.cpus.effective: no such device Feb 9 10:10:13.995683 kubelet[1407]: I0209 10:10:13.995634 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-lib-modules\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995814 kubelet[1407]: I0209 10:10:13.995708 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9cb2d13-346b-4766-b82a-90beac2036e9-cilium-config-path\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995814 kubelet[1407]: I0209 10:10:13.995733 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-bpf-maps\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995814 kubelet[1407]: I0209 10:10:13.995762 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-hostproc\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995814 kubelet[1407]: I0209 10:10:13.995782 1407 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-xtables-lock\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995814 kubelet[1407]: I0209 10:10:13.995814 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-host-proc-sys-kernel\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995842 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5nj\" (UniqueName: \"kubernetes.io/projected/e9cb2d13-346b-4766-b82a-90beac2036e9-kube-api-access-kt5nj\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995861 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-cilium-run\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995881 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-cilium-cgroup\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995900 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e9cb2d13-346b-4766-b82a-90beac2036e9-clustermesh-secrets\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995927 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-host-proc-sys-net\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.995991 kubelet[1407]: I0209 10:10:13.995949 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-cni-path\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.996137 kubelet[1407]: I0209 10:10:13.995968 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9cb2d13-346b-4766-b82a-90beac2036e9-etc-cni-netd\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.996137 kubelet[1407]: I0209 10:10:13.995997 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e9cb2d13-346b-4766-b82a-90beac2036e9-cilium-ipsec-secrets\") pod \"cilium-tdh5w\" (UID: \"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:13.996137 kubelet[1407]: I0209 10:10:13.996020 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9cb2d13-346b-4766-b82a-90beac2036e9-hubble-tls\") pod \"cilium-tdh5w\" (UID: 
\"e9cb2d13-346b-4766-b82a-90beac2036e9\") " pod="kube-system/cilium-tdh5w" Feb 9 10:10:14.250145 kubelet[1407]: I0209 10:10:14.250105 1407 setters.go:552] "Node became not ready" node="10.0.0.134" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T10:10:14Z","lastTransitionTime":"2024-02-09T10:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 10:10:14.266613 kubelet[1407]: E0209 10:10:14.266572 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:10:14.267121 env[1137]: time="2024-02-09T10:10:14.267068079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdh5w,Uid:e9cb2d13-346b-4766-b82a-90beac2036e9,Namespace:kube-system,Attempt:0,}" Feb 9 10:10:14.281874 env[1137]: time="2024-02-09T10:10:14.281804482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:10:14.281874 env[1137]: time="2024-02-09T10:10:14.281851641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:10:14.281874 env[1137]: time="2024-02-09T10:10:14.281862081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:10:14.282128 env[1137]: time="2024-02-09T10:10:14.282089479Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5 pid=3045 runtime=io.containerd.runc.v2 Feb 9 10:10:14.295576 systemd[1]: Started cri-containerd-c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5.scope. 
Feb 9 10:10:14.328855 env[1137]: time="2024-02-09T10:10:14.328787220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdh5w,Uid:e9cb2d13-346b-4766-b82a-90beac2036e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\"" Feb 9 10:10:14.329517 kubelet[1407]: E0209 10:10:14.329480 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:10:14.331707 env[1137]: time="2024-02-09T10:10:14.331662029Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:10:14.364502 env[1137]: time="2024-02-09T10:10:14.364446719Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1\"" Feb 9 10:10:14.365150 env[1137]: time="2024-02-09T10:10:14.365116752Z" level=info msg="StartContainer for \"4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1\"" Feb 9 10:10:14.378669 systemd[1]: Started cri-containerd-4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1.scope. Feb 9 10:10:14.424099 env[1137]: time="2024-02-09T10:10:14.424046762Z" level=info msg="StartContainer for \"4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1\" returns successfully" Feb 9 10:10:14.438405 systemd[1]: cri-containerd-4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1.scope: Deactivated successfully. Feb 9 10:10:14.453460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1-rootfs.mount: Deactivated successfully. 
Feb 9 10:10:14.460403 env[1137]: time="2024-02-09T10:10:14.460363174Z" level=info msg="shim disconnected" id=4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1
Feb 9 10:10:14.460529 env[1137]: time="2024-02-09T10:10:14.460405813Z" level=warning msg="cleaning up after shim disconnected" id=4e65bb4f7b893b47182068ef7b5dbb6c5f37549758819a3de6ec30e4d4b82ef1 namespace=k8s.io
Feb 9 10:10:14.460529 env[1137]: time="2024-02-09T10:10:14.460415733Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:14.467665 env[1137]: time="2024-02-09T10:10:14.467628616Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:14.761418 kubelet[1407]: E0209 10:10:14.761370 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:14.911901 kubelet[1407]: E0209 10:10:14.911659 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:14.911901 kubelet[1407]: E0209 10:10:14.911666 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:14.913711 env[1137]: time="2024-02-09T10:10:14.913659169Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:10:14.923876 env[1137]: time="2024-02-09T10:10:14.923829341Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0\""
Feb 9 10:10:14.924508 env[1137]: time="2024-02-09T10:10:14.924473054Z" level=info msg="StartContainer for \"899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0\""
Feb 9 10:10:14.938711 systemd[1]: Started cri-containerd-899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0.scope.
Feb 9 10:10:14.966094 env[1137]: time="2024-02-09T10:10:14.966045369Z" level=info msg="StartContainer for \"899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0\" returns successfully"
Feb 9 10:10:14.970732 systemd[1]: cri-containerd-899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0.scope: Deactivated successfully.
Feb 9 10:10:14.990094 env[1137]: time="2024-02-09T10:10:14.990043313Z" level=info msg="shim disconnected" id=899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0
Feb 9 10:10:14.990363 env[1137]: time="2024-02-09T10:10:14.990336150Z" level=warning msg="cleaning up after shim disconnected" id=899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0 namespace=k8s.io
Feb 9 10:10:14.990441 env[1137]: time="2024-02-09T10:10:14.990426949Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:14.997691 env[1137]: time="2024-02-09T10:10:14.997656752Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3194 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:15.389055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-899a917ca076b50a36226f504e8f30f8934143abfc54a856095fcef7da0715e0-rootfs.mount: Deactivated successfully.
Feb 9 10:10:15.762249 kubelet[1407]: E0209 10:10:15.762117 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:15.810699 kubelet[1407]: I0209 10:10:15.810667 1407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="02491839-89e5-4fab-ad4e-30f3b18a9ad8" path="/var/lib/kubelet/pods/02491839-89e5-4fab-ad4e-30f3b18a9ad8/volumes"
Feb 9 10:10:15.915168 kubelet[1407]: E0209 10:10:15.915132 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:15.917113 env[1137]: time="2024-02-09T10:10:15.917059243Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:10:15.930158 env[1137]: time="2024-02-09T10:10:15.930114669Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f\""
Feb 9 10:10:15.930923 env[1137]: time="2024-02-09T10:10:15.930865101Z" level=info msg="StartContainer for \"f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f\""
Feb 9 10:10:15.948623 systemd[1]: Started cri-containerd-f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f.scope.
Feb 9 10:10:15.979575 env[1137]: time="2024-02-09T10:10:15.979232643Z" level=info msg="StartContainer for \"f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f\" returns successfully"
Feb 9 10:10:15.980203 systemd[1]: cri-containerd-f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f.scope: Deactivated successfully.
Feb 9 10:10:16.002667 env[1137]: time="2024-02-09T10:10:16.002614323Z" level=info msg="shim disconnected" id=f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f
Feb 9 10:10:16.002667 env[1137]: time="2024-02-09T10:10:16.002658882Z" level=warning msg="cleaning up after shim disconnected" id=f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f namespace=k8s.io
Feb 9 10:10:16.002667 env[1137]: time="2024-02-09T10:10:16.002668322Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:16.011149 env[1137]: time="2024-02-09T10:10:16.011102959Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3249 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:16.389126 systemd[1]: run-containerd-runc-k8s.io-f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f-runc.LF6NhK.mount: Deactivated successfully.
Feb 9 10:10:16.389261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f41c7f5b8385590787ca086e3f4c2d64460f30d196e49af63ca0266ce9aa349f-rootfs.mount: Deactivated successfully.
Feb 9 10:10:16.763262 kubelet[1407]: E0209 10:10:16.763153 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:16.784917 kubelet[1407]: E0209 10:10:16.784855 1407 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:10:16.919075 kubelet[1407]: E0209 10:10:16.919041 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:16.921128 env[1137]: time="2024-02-09T10:10:16.921083521Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:10:16.934726 env[1137]: time="2024-02-09T10:10:16.934686186Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee\""
Feb 9 10:10:16.935457 env[1137]: time="2024-02-09T10:10:16.935431538Z" level=info msg="StartContainer for \"3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee\""
Feb 9 10:10:16.952672 systemd[1]: Started cri-containerd-3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee.scope.
Feb 9 10:10:16.978206 systemd[1]: cri-containerd-3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee.scope: Deactivated successfully.
Feb 9 10:10:16.980347 env[1137]: time="2024-02-09T10:10:16.980313373Z" level=info msg="StartContainer for \"3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee\" returns successfully"
Feb 9 10:10:16.998897 env[1137]: time="2024-02-09T10:10:16.998854428Z" level=info msg="shim disconnected" id=3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee
Feb 9 10:10:16.999119 env[1137]: time="2024-02-09T10:10:16.999094506Z" level=warning msg="cleaning up after shim disconnected" id=3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee namespace=k8s.io
Feb 9 10:10:16.999201 env[1137]: time="2024-02-09T10:10:16.999171745Z" level=info msg="cleaning up dead shim"
Feb 9 10:10:17.005613 env[1137]: time="2024-02-09T10:10:17.005579643Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:10:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3303 runtime=io.containerd.runc.v2\n"
Feb 9 10:10:17.389241 systemd[1]: run-containerd-runc-k8s.io-3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee-runc.NjOL5h.mount: Deactivated successfully.
Feb 9 10:10:17.389350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c45a53fbcf3e28614a445157b298d6a4772517d1714c34ca0ebfd12f698edee-rootfs.mount: Deactivated successfully.
Feb 9 10:10:17.764225 kubelet[1407]: E0209 10:10:17.764108 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:17.922641 kubelet[1407]: E0209 10:10:17.922613 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:17.924835 env[1137]: time="2024-02-09T10:10:17.924796308Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:10:17.935701 env[1137]: time="2024-02-09T10:10:17.935653524Z" level=info msg="CreateContainer within sandbox \"c0a2e07812708063b9433ada7cfce0b2cb5dadf2e04998c4f3b67ca823eebee5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81eec196026dfd8916516506c646f746e49b9ff1e50eee0833d6bd1f5ffb9c6d\""
Feb 9 10:10:17.936505 env[1137]: time="2024-02-09T10:10:17.936478956Z" level=info msg="StartContainer for \"81eec196026dfd8916516506c646f746e49b9ff1e50eee0833d6bd1f5ffb9c6d\""
Feb 9 10:10:17.952436 systemd[1]: Started cri-containerd-81eec196026dfd8916516506c646f746e49b9ff1e50eee0833d6bd1f5ffb9c6d.scope.
Feb 9 10:10:17.988942 env[1137]: time="2024-02-09T10:10:17.988895494Z" level=info msg="StartContainer for \"81eec196026dfd8916516506c646f746e49b9ff1e50eee0833d6bd1f5ffb9c6d\" returns successfully"
Feb 9 10:10:18.252218 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 10:10:18.764491 kubelet[1407]: E0209 10:10:18.764447 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:18.927691 kubelet[1407]: E0209 10:10:18.927653 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:18.940129 kubelet[1407]: I0209 10:10:18.939915 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tdh5w" podStartSLOduration=5.939881516 podCreationTimestamp="2024-02-09 10:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:10:18.9394172 +0000 UTC m=+57.961564793" watchObservedRunningTime="2024-02-09 10:10:18.939881516 +0000 UTC m=+57.962029109"
Feb 9 10:10:19.765121 kubelet[1407]: E0209 10:10:19.765083 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:20.267955 kubelet[1407]: E0209 10:10:20.267924 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:20.766218 kubelet[1407]: E0209 10:10:20.766169 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:20.894814 systemd-networkd[1059]: lxc_health: Link UP
Feb 9 10:10:20.906231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:10:20.906380 systemd-networkd[1059]: lxc_health: Gained carrier
Feb 9 10:10:21.724056 kubelet[1407]: E0209 10:10:21.724011 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:21.730704 env[1137]: time="2024-02-09T10:10:21.730666432Z" level=info msg="StopPodSandbox for \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\""
Feb 9 10:10:21.731174 env[1137]: time="2024-02-09T10:10:21.731130948Z" level=info msg="TearDown network for sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" successfully"
Feb 9 10:10:21.731299 env[1137]: time="2024-02-09T10:10:21.731281027Z" level=info msg="StopPodSandbox for \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" returns successfully"
Feb 9 10:10:21.731772 env[1137]: time="2024-02-09T10:10:21.731746583Z" level=info msg="RemovePodSandbox for \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\""
Feb 9 10:10:21.732028 env[1137]: time="2024-02-09T10:10:21.731977141Z" level=info msg="Forcibly stopping sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\""
Feb 9 10:10:21.732143 env[1137]: time="2024-02-09T10:10:21.732125740Z" level=info msg="TearDown network for sandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" successfully"
Feb 9 10:10:21.736264 env[1137]: time="2024-02-09T10:10:21.736235545Z" level=info msg="RemovePodSandbox \"2237956eb8cf56b0cabfa68f2bc45a0d8cc88f4160db40d01742072ce7b107ba\" returns successfully"
Feb 9 10:10:21.766493 kubelet[1407]: E0209 10:10:21.766440 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:22.269035 kubelet[1407]: E0209 10:10:22.269008 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:22.332351 systemd-networkd[1059]: lxc_health: Gained IPv6LL
Feb 9 10:10:22.767571 kubelet[1407]: E0209 10:10:22.767518 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:22.934806 kubelet[1407]: E0209 10:10:22.934761 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:23.768699 kubelet[1407]: E0209 10:10:23.768658 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:23.936008 kubelet[1407]: E0209 10:10:23.935969 1407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:10:24.769461 kubelet[1407]: E0209 10:10:24.769426 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:24.977163 systemd[1]: run-containerd-runc-k8s.io-81eec196026dfd8916516506c646f746e49b9ff1e50eee0833d6bd1f5ffb9c6d-runc.5ddLo0.mount: Deactivated successfully.
Feb 9 10:10:25.770204 kubelet[1407]: E0209 10:10:25.770136 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:26.770812 kubelet[1407]: E0209 10:10:26.770767 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:27.771120 kubelet[1407]: E0209 10:10:27.771072 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:10:28.771610 kubelet[1407]: E0209 10:10:28.771554 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"