Feb 12 19:17:39.732601 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:17:39.732620 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:17:39.732629 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:17:39.732635 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:17:39.732640 kernel: random: crng init done
Feb 12 19:17:39.732645 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:17:39.732651 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:17:39.732657 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:17:39.732663 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732668 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732674 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732679 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732684 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732690 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732698 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732704 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732710 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:17:39.732715 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:17:39.732721 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:17:39.732727 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:17:39.732733 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Feb 12 19:17:39.732738 kernel: Zone ranges:
Feb 12 19:17:39.732763 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:17:39.732772 kernel: DMA32 empty
Feb 12 19:17:39.732778 kernel: Normal empty
Feb 12 19:17:39.732784 kernel: Movable zone start for each node
Feb 12 19:17:39.732789 kernel: Early memory node ranges
Feb 12 19:17:39.732795 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:17:39.732801 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:17:39.732807 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:17:39.732813 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:17:39.732819 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:17:39.732825 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:17:39.732830 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:17:39.732836 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:17:39.732843 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:17:39.732849 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:17:39.732854 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:17:39.732860 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:17:39.732866 kernel: psci: Trusted OS migration not required
Feb 12 19:17:39.732874 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:17:39.732880 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:17:39.732888 kernel: ACPI: SRAT not present
Feb 12 19:17:39.732895 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:17:39.732901 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:17:39.732907 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:17:39.732913 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:17:39.732919 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:17:39.732925 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:17:39.732931 kernel: CPU features: detected: Spectre-v4
Feb 12 19:17:39.732937 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:17:39.732945 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:17:39.732951 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:17:39.732957 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:17:39.732963 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:17:39.732969 kernel: Policy zone: DMA
Feb 12 19:17:39.732976 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:17:39.732982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:17:39.732989 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:17:39.732995 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:17:39.733001 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:17:39.733008 kernel: Memory: 2459148K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113140K reserved, 0K cma-reserved)
Feb 12 19:17:39.733015 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:17:39.733022 kernel: trace event string verifier disabled
Feb 12 19:17:39.733028 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:17:39.733034 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:17:39.733040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:17:39.733046 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:17:39.733053 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:17:39.733059 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:17:39.733065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:17:39.733071 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:17:39.733077 kernel: GICv3: 256 SPIs implemented
Feb 12 19:17:39.733085 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:17:39.733091 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:17:39.733097 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:17:39.733104 kernel: GICv3: 16 PPIs implemented
Feb 12 19:17:39.733110 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:17:39.733116 kernel: ACPI: SRAT not present
Feb 12 19:17:39.733123 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:17:39.733129 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:17:39.733136 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:17:39.733143 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:17:39.733149 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:17:39.733156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:17:39.733186 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:17:39.733192 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:17:39.733199 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:17:39.733205 kernel: arm-pv: using stolen time PV
Feb 12 19:17:39.733212 kernel: Console: colour dummy device 80x25
Feb 12 19:17:39.733218 kernel: ACPI: Core revision 20210730
Feb 12 19:17:39.733225 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:17:39.733231 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:17:39.733237 kernel: LSM: Security Framework initializing
Feb 12 19:17:39.733249 kernel: SELinux: Initializing.
Feb 12 19:17:39.733264 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:17:39.733270 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:17:39.733277 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:17:39.733283 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:17:39.733289 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:17:39.733295 kernel: Remapping and enabling EFI services.
Feb 12 19:17:39.733302 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:17:39.733308 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:17:39.733314 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:17:39.733322 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:17:39.733329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:17:39.733335 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:17:39.733341 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:17:39.733347 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:17:39.733354 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:17:39.733360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:17:39.733367 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:17:39.733373 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:17:39.733379 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:17:39.733386 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:17:39.733393 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:17:39.733399 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:17:39.733405 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:17:39.733417 kernel: SMP: Total of 4 processors activated.
Feb 12 19:17:39.733425 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:17:39.733432 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:17:39.733438 kernel: CPU features: detected: Common not Private translations
Feb 12 19:17:39.733445 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:17:39.733451 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:17:39.733458 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:17:39.733465 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:17:39.733472 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:17:39.733479 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:17:39.733492 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:17:39.733499 kernel: alternatives: patching kernel code
Feb 12 19:17:39.733506 kernel: devtmpfs: initialized
Feb 12 19:17:39.733513 kernel: KASLR enabled
Feb 12 19:17:39.733520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:17:39.733526 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:17:39.733533 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:17:39.733539 kernel: SMBIOS 3.0.0 present.
Feb 12 19:17:39.733546 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:17:39.733553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:17:39.733559 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:17:39.733566 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:17:39.733574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:17:39.733581 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:17:39.733587 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Feb 12 19:17:39.733594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:17:39.733600 kernel: cpuidle: using governor menu
Feb 12 19:17:39.733608 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:17:39.733614 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:17:39.733621 kernel: ACPI: bus type PCI registered
Feb 12 19:17:39.733628 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:17:39.733636 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:17:39.733642 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:17:39.733649 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:17:39.733655 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:17:39.733662 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:17:39.733669 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:17:39.733676 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:17:39.733683 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:17:39.733690 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:17:39.733697 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:17:39.733704 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:17:39.733711 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:17:39.733718 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:17:39.733724 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:17:39.733731 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:17:39.733737 kernel: ACPI: Interpreter enabled
Feb 12 19:17:39.733749 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:17:39.733756 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:17:39.733764 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:17:39.733771 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:17:39.733777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:17:39.733918 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:17:39.733991 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:17:39.734053 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:17:39.734110 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:17:39.734172 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:17:39.734181 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:17:39.734188 kernel: PCI host bridge to bus 0000:00
Feb 12 19:17:39.734255 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:17:39.734312 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:17:39.734367 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:17:39.734423 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:17:39.734508 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:17:39.734589 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:17:39.734659 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:17:39.734724 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:17:39.734816 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:17:39.734881 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:17:39.734945 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:17:39.735012 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:17:39.735086 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:17:39.735144 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:17:39.735204 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:17:39.735213 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:17:39.735220 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:17:39.735227 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:17:39.735236 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:17:39.735243 kernel: iommu: Default domain type: Translated
Feb 12 19:17:39.735250 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:17:39.735257 kernel: vgaarb: loaded
Feb 12 19:17:39.735264 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:17:39.735271 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:17:39.735278 kernel: PTP clock support registered
Feb 12 19:17:39.735284 kernel: Registered efivars operations
Feb 12 19:17:39.735291 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:17:39.735300 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:17:39.735307 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:17:39.735313 kernel: pnp: PnP ACPI init
Feb 12 19:17:39.735385 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:17:39.735395 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:17:39.735403 kernel: NET: Registered PF_INET protocol family
Feb 12 19:17:39.735410 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:17:39.735417 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:17:39.735424 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:17:39.735433 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:17:39.735440 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:17:39.735447 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:17:39.735454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:17:39.735461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:17:39.735468 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:17:39.735476 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:17:39.735489 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:17:39.735498 kernel: kvm [1]: HYP mode not available
Feb 12 19:17:39.735505 kernel: Initialise system trusted keyrings
Feb 12 19:17:39.735512 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:17:39.735518 kernel: Key type asymmetric registered
Feb 12 19:17:39.735525 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:17:39.735532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:17:39.735539 kernel: io scheduler mq-deadline registered
Feb 12 19:17:39.735545 kernel: io scheduler kyber registered
Feb 12 19:17:39.735552 kernel: io scheduler bfq registered
Feb 12 19:17:39.735559 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:17:39.735568 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:17:39.735575 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:17:39.735641 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:17:39.735650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:17:39.735657 kernel: thunder_xcv, ver 1.0
Feb 12 19:17:39.735664 kernel: thunder_bgx, ver 1.0
Feb 12 19:17:39.735670 kernel: nicpf, ver 1.0
Feb 12 19:17:39.735677 kernel: nicvf, ver 1.0
Feb 12 19:17:39.735756 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:17:39.735822 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:17:39 UTC (1707765459)
Feb 12 19:17:39.735831 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:17:39.735838 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:17:39.735844 kernel: Segment Routing with IPv6
Feb 12 19:17:39.735851 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:17:39.735858 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:17:39.735864 kernel: Key type dns_resolver registered
Feb 12 19:17:39.735871 kernel: registered taskstats version 1
Feb 12 19:17:39.735880 kernel: Loading compiled-in X.509 certificates
Feb 12 19:17:39.735887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:17:39.735893 kernel: Key type .fscrypt registered
Feb 12 19:17:39.735900 kernel: Key type fscrypt-provisioning registered
Feb 12 19:17:39.735907 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:17:39.735915 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:17:39.735921 kernel: ima: No architecture policies found
Feb 12 19:17:39.735928 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:17:39.735936 kernel: Run /init as init process
Feb 12 19:17:39.735943 kernel: with arguments:
Feb 12 19:17:39.735949 kernel: /init
Feb 12 19:17:39.735956 kernel: with environment:
Feb 12 19:17:39.735962 kernel: HOME=/
Feb 12 19:17:39.735969 kernel: TERM=linux
Feb 12 19:17:39.735976 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:17:39.735986 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:17:39.735995 systemd[1]: Detected virtualization kvm.
Feb 12 19:17:39.736004 systemd[1]: Detected architecture arm64.
Feb 12 19:17:39.736011 systemd[1]: Running in initrd.
Feb 12 19:17:39.736018 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:17:39.736025 systemd[1]: Hostname set to .
Feb 12 19:17:39.736032 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:17:39.736039 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:17:39.736046 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:17:39.736054 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:17:39.736062 systemd[1]: Reached target paths.target.
Feb 12 19:17:39.736069 systemd[1]: Reached target slices.target.
Feb 12 19:17:39.736076 systemd[1]: Reached target swap.target.
Feb 12 19:17:39.736084 systemd[1]: Reached target timers.target.
Feb 12 19:17:39.736091 systemd[1]: Listening on iscsid.socket.
Feb 12 19:17:39.736098 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:17:39.736105 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:17:39.736113 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:17:39.736121 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:17:39.736128 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:17:39.736135 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:17:39.736142 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:17:39.736149 systemd[1]: Reached target sockets.target.
Feb 12 19:17:39.736156 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:17:39.736163 systemd[1]: Finished network-cleanup.service.
Feb 12 19:17:39.736170 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:17:39.736178 systemd[1]: Starting systemd-journald.service...
Feb 12 19:17:39.736186 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:17:39.736193 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:17:39.736200 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:17:39.736207 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:17:39.736214 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:17:39.736222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:17:39.736229 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:17:39.736237 kernel: audit: type=1130 audit(1707765459.734:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.736245 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:17:39.736256 systemd-journald[289]: Journal started
Feb 12 19:17:39.736294 systemd-journald[289]: Runtime Journal (/run/log/journal/e8e18c8719fe460197d53cfc2636ecc5) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:17:39.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.727703 systemd-modules-load[290]: Inserted module 'overlay'
Feb 12 19:17:39.739269 systemd[1]: Started systemd-journald.service.
Feb 12 19:17:39.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.740276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:17:39.746799 kernel: audit: type=1130 audit(1707765459.739:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.746822 kernel: audit: type=1130 audit(1707765459.742:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.751940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:17:39.756368 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:17:39.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.758267 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:17:39.762575 kernel: audit: type=1130 audit(1707765459.757:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.762597 kernel: Bridge firewalling registered
Feb 12 19:17:39.761176 systemd-resolved[291]: Positive Trust Anchors:
Feb 12 19:17:39.761183 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:17:39.761211 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:17:39.761397 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 12 19:17:39.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.766594 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 12 19:17:39.772845 kernel: audit: type=1130 audit(1707765459.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.767409 systemd[1]: Started systemd-resolved.service.
Feb 12 19:17:39.769861 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:17:39.775268 dracut-cmdline[306]: dracut-dracut-053
Feb 12 19:17:39.776781 kernel: SCSI subsystem initialized
Feb 12 19:17:39.777522 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:17:39.784302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:17:39.784359 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:17:39.784368 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:17:39.786610 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 12 19:17:39.787339 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:17:39.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.788986 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:17:39.791770 kernel: audit: type=1130 audit(1707765459.788:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.796815 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:17:39.799787 kernel: audit: type=1130 audit(1707765459.796:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.839783 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:17:39.848782 kernel: iscsi: registered transport (tcp)
Feb 12 19:17:39.861771 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:17:39.861817 kernel: QLogic iSCSI HBA Driver
Feb 12 19:17:39.897203 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:17:39.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.898999 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:17:39.901551 kernel: audit: type=1130 audit(1707765459.897:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:39.946054 kernel: raid6: neonx8 gen() 13804 MB/s
Feb 12 19:17:39.961788 kernel: raid6: neonx8 xor() 10825 MB/s
Feb 12 19:17:39.978792 kernel: raid6: neonx4 gen() 13565 MB/s
Feb 12 19:17:39.999536 kernel: raid6: neonx4 xor() 11179 MB/s
Feb 12 19:17:40.012808 kernel: raid6: neonx2 gen() 12968 MB/s
Feb 12 19:17:40.029796 kernel: raid6: neonx2 xor() 10262 MB/s
Feb 12 19:17:40.046798 kernel: raid6: neonx1 gen() 10486 MB/s
Feb 12 19:17:40.063800 kernel: raid6: neonx1 xor() 8783 MB/s
Feb 12 19:17:40.080799 kernel: raid6: int64x8 gen() 6292 MB/s
Feb 12 19:17:40.097791 kernel: raid6: int64x8 xor() 3544 MB/s
Feb 12 19:17:40.114788 kernel: raid6: int64x4 gen() 7274 MB/s
Feb 12 19:17:40.132411 kernel: raid6: int64x4 xor() 3857 MB/s
Feb 12 19:17:40.148785 kernel: raid6: int64x2 gen() 6142 MB/s
Feb 12 19:17:40.167492 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 12 19:17:40.182785 kernel: raid6: int64x1 gen() 5040 MB/s
Feb 12 19:17:40.200090 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 12 19:17:40.200157 kernel: raid6: using algorithm neonx8 gen() 13804 MB/s
Feb 12 19:17:40.200167 kernel: raid6: .... xor() 10825 MB/s, rmw enabled
Feb 12 19:17:40.200176 kernel: raid6: using neon recovery algorithm
Feb 12 19:17:40.212824 kernel: xor: measuring software checksum speed
Feb 12 19:17:40.212858 kernel: 8regs : 17333 MB/sec
Feb 12 19:17:40.213762 kernel: 32regs : 20755 MB/sec
Feb 12 19:17:40.214803 kernel: arm64_neon : 27949 MB/sec
Feb 12 19:17:40.214821 kernel: xor: using function: arm64_neon (27949 MB/sec)
Feb 12 19:17:40.270771 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:17:40.281644 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:17:40.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:40.284000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:17:40.284000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:17:40.285348 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:17:40.286584 kernel: audit: type=1130 audit(1707765460.281:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:40.304392 systemd-udevd[488]: Using default interface naming scheme 'v252'.
Feb 12 19:17:40.307844 systemd[1]: Started systemd-udevd.service.
Feb 12 19:17:40.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:40.309577 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:17:40.321434 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 12 19:17:40.351098 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:17:40.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:40.352561 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:17:40.388056 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:17:40.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:17:40.419537 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:17:40.421931 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:17:40.421962 kernel: GPT:9289727 != 19775487
Feb 12 19:17:40.421972 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:17:40.421982 kernel: GPT:9289727 != 19775487 Feb 12 19:17:40.422757 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:17:40.422775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:17:40.432441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:17:40.435779 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (547) Feb 12 19:17:40.440998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:17:40.442092 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:17:40.447199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:17:40.451352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:17:40.455089 systemd[1]: Starting disk-uuid.service... Feb 12 19:17:40.461468 disk-uuid[558]: Primary Header is updated. Feb 12 19:17:40.461468 disk-uuid[558]: Secondary Entries is updated. Feb 12 19:17:40.461468 disk-uuid[558]: Secondary Header is updated. Feb 12 19:17:40.464779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:17:41.477361 disk-uuid[559]: The operation has completed successfully. Feb 12 19:17:41.478669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:17:41.498244 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:17:41.498353 systemd[1]: Finished disk-uuid.service. Feb 12 19:17:41.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.505727 systemd[1]: Starting verity-setup.service... 
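A note on the GPT warnings above: a GPT disk keeps a backup (alternate) header at the last LBA, so when a disk image is grown after build, the backup header stays at the old last sector until a partitioning tool relocates it. The arithmetic below is a minimal sketch using only the figures printed in the log; the interpretation that the image was resized is an inference, not something the log states.

```python
# Sketch of why the kernel logs "GPT:9289727 != 19775487" for /dev/vda.
# The backup GPT header should sit at the disk's last LBA; here it sits
# where the last LBA of the original (smaller) image used to be.
SECTOR = 512
disk_sectors = 19775488          # from the log: [vda] 19775488 512-byte logical blocks
backup_lba_found = 9289727       # where the backup GPT header actually is

expected_backup_lba = disk_sectors - 1   # last LBA of the current disk
print(expected_backup_lba)               # 19775487

# The stale location implies the original image size before the resize:
original_disk_sectors = backup_lba_found + 1
print(original_disk_sectors * SECTOR)    # 4756340736 bytes (~4.8 GB image)
```

This is why the kernel suggests "Use GNU Parted to correct GPT errors": the fix is to move the backup header to the real end of the disk, not to repartition.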
Feb 12 19:17:41.520763 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:17:41.543991 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:17:41.546689 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:17:41.547728 systemd[1]: Finished verity-setup.service. Feb 12 19:17:41.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.633760 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:17:41.634259 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:17:41.635134 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:17:41.635897 systemd[1]: Starting ignition-setup.service... Feb 12 19:17:41.638238 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:17:41.644112 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:17:41.644149 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:17:41.644158 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:17:41.655914 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:17:41.664000 systemd[1]: Finished ignition-setup.service. Feb 12 19:17:41.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.665698 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:17:41.723438 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:17:41.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:41.724000 audit: BPF prog-id=9 op=LOAD Feb 12 19:17:41.725917 systemd[1]: Starting systemd-networkd.service... Feb 12 19:17:41.744155 ignition[651]: Ignition 2.14.0 Feb 12 19:17:41.744166 ignition[651]: Stage: fetch-offline Feb 12 19:17:41.744206 ignition[651]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:41.744253 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:41.744376 ignition[651]: parsed url from cmdline: "" Feb 12 19:17:41.744379 ignition[651]: no config URL provided Feb 12 19:17:41.744384 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:17:41.744391 ignition[651]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:17:41.744406 ignition[651]: op(1): [started] loading QEMU firmware config module Feb 12 19:17:41.744411 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:17:41.749935 systemd-networkd[736]: lo: Link UP Feb 12 19:17:41.749938 systemd-networkd[736]: lo: Gained carrier Feb 12 19:17:41.750410 systemd-networkd[736]: Enumeration completed Feb 12 19:17:41.750612 systemd-networkd[736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:17:41.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.750837 systemd[1]: Started systemd-networkd.service. Feb 12 19:17:41.752877 ignition[651]: op(1): [finished] loading QEMU firmware config module Feb 12 19:17:41.751540 systemd-networkd[736]: eth0: Link UP Feb 12 19:17:41.751543 systemd-networkd[736]: eth0: Gained carrier Feb 12 19:17:41.752890 systemd[1]: Reached target network.target. Feb 12 19:17:41.755354 systemd[1]: Starting iscsiuio.service... Feb 12 19:17:41.766123 systemd[1]: Started iscsiuio.service. 
Feb 12 19:17:41.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.767850 systemd[1]: Starting iscsid.service... Feb 12 19:17:41.771591 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:17:41.771591 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:17:41.771591 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:17:41.771591 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:17:41.771591 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:17:41.771591 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:17:41.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.775825 systemd[1]: Started iscsid.service. Feb 12 19:17:41.777858 systemd-networkd[736]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:17:41.779247 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:17:41.790249 ignition[651]: parsing config with SHA512: 4393dfa0ec5da12b099072072cffec0f42a8358cda9c5844096e35c7d539fe029769b5bcd288cc34e05d27dfb3a2af680ca7a76c925565516a01455f104d1958 Feb 12 19:17:41.791242 systemd[1]: Finished dracut-initqueue.service.
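The iscsid warning above asks for an InitiatorName in the iqn.yyyy-mm.&lt;reversed domain name&gt;[:identifier] form. A rough sketch of what a well-formed entry looks like; `valid_initiator_name` and `IQN_RE` are hypothetical helpers for illustration, a simplified check rather than open-iscsi's actual parser:

```python
import re

# Hypothetical, simplified validator for an initiatorname.iscsi line of the
# form InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
IQN_RE = re.compile(r"^InitiatorName=iqn\.\d{4}-\d{2}\.[A-Za-z0-9.-]+(:[^\s]+)?$")

def valid_initiator_name(line: str) -> bool:
    return IQN_RE.match(line) is not None

# The example from the log message itself:
print(valid_initiator_name("InitiatorName=iqn.2001-04.com.redhat:fc6"))  # True
print(valid_initiator_name("InitiatorName=fc6"))                         # False
```

As the log notes, the warning is harmless here: no iSCSI targets are configured, and for hardware iSCSI (e.g. qla4xxx) it can be ignored.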
Feb 12 19:17:41.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.792298 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:17:41.794052 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:17:41.795360 systemd[1]: Reached target remote-fs.target. Feb 12 19:17:41.797494 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:17:41.805729 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:17:41.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.817468 unknown[651]: fetched base config from "system" Feb 12 19:17:41.817487 unknown[651]: fetched user config from "qemu" Feb 12 19:17:41.817922 ignition[651]: fetch-offline: fetch-offline passed Feb 12 19:17:41.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.819198 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:17:41.817980 ignition[651]: Ignition finished successfully Feb 12 19:17:41.820637 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:17:41.821435 systemd[1]: Starting ignition-kargs.service... 
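The DHCPv4 lease logged above (10.0.0.62/16 with gateway 10.0.0.1) can be sanity-checked with the standard library: the gateway must lie inside the leased prefix for the route to be directly reachable.

```python
import ipaddress

# The lease from the log: address 10.0.0.62 with a /16 prefix.
iface = ipaddress.ip_interface("10.0.0.62/16")
print(iface.network)                                       # 10.0.0.0/16
# The gateway 10.0.0.1 is on-link, i.e. inside the same /16:
print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True
```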
Feb 12 19:17:41.830287 ignition[758]: Ignition 2.14.0 Feb 12 19:17:41.830297 ignition[758]: Stage: kargs Feb 12 19:17:41.830402 ignition[758]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:41.830412 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:41.831252 ignition[758]: kargs: kargs passed Feb 12 19:17:41.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.832927 systemd[1]: Finished ignition-kargs.service. Feb 12 19:17:41.831299 ignition[758]: Ignition finished successfully Feb 12 19:17:41.835439 systemd[1]: Starting ignition-disks.service... Feb 12 19:17:41.842777 ignition[764]: Ignition 2.14.0 Feb 12 19:17:41.842790 ignition[764]: Stage: disks Feb 12 19:17:41.842902 ignition[764]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:41.842912 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:41.845253 systemd[1]: Finished ignition-disks.service. Feb 12 19:17:41.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.843951 ignition[764]: disks: disks passed Feb 12 19:17:41.846864 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:17:41.843999 ignition[764]: Ignition finished successfully Feb 12 19:17:41.848041 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:17:41.849133 systemd[1]: Reached target local-fs.target. Feb 12 19:17:41.850407 systemd[1]: Reached target sysinit.target. Feb 12 19:17:41.851563 systemd[1]: Reached target basic.target. Feb 12 19:17:41.853598 systemd[1]: Starting systemd-fsck-root.service... 
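The systemd-fsck-root run that follows reports a summary of the form "clean, 602/553520 files, 56014/553472 blocks". The arithmetic behind such a summary, using the figures from this boot:

```python
# fsck summary fields: used/total inodes and used/total blocks on ROOT.
files_used, files_total = 602, 553520
blocks_used, blocks_total = 56014, 553472

print(round(100 * blocks_used / blocks_total, 1))  # 10.1 (% of blocks in use)
print(round(100 * files_used / files_total, 1))    # 0.1 (% of inodes in use)
```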
Feb 12 19:17:41.864142 systemd-fsck[772]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:17:41.867313 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:17:41.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.869885 systemd[1]: Mounting sysroot.mount... Feb 12 19:17:41.876622 systemd[1]: Mounted sysroot.mount. Feb 12 19:17:41.877784 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:17:41.877428 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:17:41.879646 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:17:41.880689 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:17:41.880771 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:17:41.880795 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:17:41.883143 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:17:41.885320 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:17:41.889861 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:17:41.894345 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:17:41.898553 initrd-setup-root[798]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:17:41.903040 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:17:41.931834 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:17:41.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:41.933629 systemd[1]: Starting ignition-mount.service... Feb 12 19:17:41.935054 systemd[1]: Starting sysroot-boot.service... Feb 12 19:17:41.940367 bash[823]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:17:41.949184 ignition[825]: INFO : Ignition 2.14.0 Feb 12 19:17:41.949184 ignition[825]: INFO : Stage: mount Feb 12 19:17:41.951023 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:41.951023 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:41.951023 ignition[825]: INFO : mount: mount passed Feb 12 19:17:41.951023 ignition[825]: INFO : Ignition finished successfully Feb 12 19:17:41.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.951978 systemd[1]: Finished ignition-mount.service. Feb 12 19:17:41.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:41.954144 systemd[1]: Finished sysroot-boot.service. Feb 12 19:17:42.554808 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:17:42.560758 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (833) Feb 12 19:17:42.561971 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:17:42.561983 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:17:42.561992 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:17:42.565159 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:17:42.566807 systemd[1]: Starting ignition-files.service... 
Feb 12 19:17:42.580600 ignition[853]: INFO : Ignition 2.14.0 Feb 12 19:17:42.580600 ignition[853]: INFO : Stage: files Feb 12 19:17:42.581822 ignition[853]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:42.581822 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:42.581822 ignition[853]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:17:42.586019 ignition[853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:17:42.586019 ignition[853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:17:42.588859 ignition[853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:17:42.588859 ignition[853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:17:42.590802 ignition[853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:17:42.590802 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:17:42.590802 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 12 19:17:42.589012 unknown[853]: wrote ssh authorized keys file for user: core Feb 12 19:17:42.917561 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:17:43.164892 systemd-networkd[736]: eth0: Gained IPv6LL Feb 12 19:17:43.320754 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 12 19:17:43.323641 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] 
writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:17:43.323641 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:17:43.323641 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:17:43.553156 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:17:43.675497 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 12 19:17:43.678505 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:17:43.678505 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:17:43.678505 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:17:43.739884 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:17:44.023699 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Feb 12 19:17:44.026593 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:17:44.026593 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:17:44.026593 ignition[853]: 
INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:17:44.048320 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:17:44.644657 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:17:44.647874 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(c): [started] processing unit "prepare-critools.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(c): [finished] processing unit "prepare-critools.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 12 19:17:44.647874 ignition[853]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:17:44.680563 ignition[853]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 19:17:44.680563 ignition[853]: 
INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:17:44.704153 ignition[853]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:17:44.706584 ignition[853]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 19:17:44.706584 ignition[853]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:17:44.706584 ignition[853]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:17:44.706584 ignition[853]: INFO : files: files passed Feb 12 19:17:44.706584 ignition[853]: INFO : Ignition finished successfully Feb 12 19:17:44.715408 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:17:44.715429 kernel: audit: type=1130 audit(1707765464.707:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.706622 systemd[1]: Finished ignition-files.service. Feb 12 19:17:44.709305 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:17:44.717727 initrd-setup-root-after-ignition[877]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:17:44.723220 kernel: audit: type=1130 audit(1707765464.717:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:44.723247 kernel: audit: type=1131 audit(1707765464.717:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.712569 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:17:44.727247 kernel: audit: type=1130 audit(1707765464.723:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.727350 initrd-setup-root-after-ignition[880]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:17:44.713410 systemd[1]: Starting ignition-quench.service... Feb 12 19:17:44.716590 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:17:44.716687 systemd[1]: Finished ignition-quench.service. Feb 12 19:17:44.719187 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:17:44.724205 systemd[1]: Reached target ignition-complete.target. Feb 12 19:17:44.728629 systemd[1]: Starting initrd-parse-etc.service... 
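The "file matches expected sum" lines in the files stage above reflect Ignition verifying each fetched artifact (CNI plugins, crictl, kubeadm, kubelet) against a sha512 digest pinned in the config before writing it to /sysroot. A minimal sketch of that digest comparison; `matches_expected_sum` is a hypothetical helper for illustration, not Ignition's actual (Go) implementation:

```python
import hashlib

def matches_expected_sum(data: bytes, expected_hex: str) -> bool:
    # Compare the sha512 of the fetched bytes against the pinned digest.
    return hashlib.sha512(data).hexdigest() == expected_hex

payload = b"example artifact bytes"
pinned = hashlib.sha512(payload).hexdigest()   # stand-in for the digest in the config

print(matches_expected_sum(payload, pinned))       # True  -> file is written
print(matches_expected_sum(b"tampered", pinned))   # False -> fetch is rejected
```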
Feb 12 19:17:44.741694 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:17:44.741832 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:17:44.747619 kernel: audit: type=1130 audit(1707765464.743:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.747643 kernel: audit: type=1131 audit(1707765464.743:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.743416 systemd[1]: Reached target initrd-fs.target. Feb 12 19:17:44.748404 systemd[1]: Reached target initrd.target. Feb 12 19:17:44.749680 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:17:44.750568 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:17:44.761295 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:17:44.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.763063 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:17:44.765548 kernel: audit: type=1130 audit(1707765464.762:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:44.771895 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:17:44.772587 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:17:44.773714 systemd[1]: Stopped target timers.target. Feb 12 19:17:44.775048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:17:44.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.775172 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:17:44.779834 kernel: audit: type=1131 audit(1707765464.775:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.776190 systemd[1]: Stopped target initrd.target. Feb 12 19:17:44.779209 systemd[1]: Stopped target basic.target. Feb 12 19:17:44.780415 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:17:44.781484 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:17:44.782605 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:17:44.783722 systemd[1]: Stopped target remote-fs.target. Feb 12 19:17:44.784805 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:17:44.785939 systemd[1]: Stopped target sysinit.target. Feb 12 19:17:44.787114 systemd[1]: Stopped target local-fs.target. Feb 12 19:17:44.788103 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:17:44.789089 systemd[1]: Stopped target swap.target. Feb 12 19:17:44.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.790013 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:17:44.790130 systemd[1]: Stopped dracut-pre-mount.service. 
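The audit records throughout this log carry a numeric field of the form audit(1707765464.775:41): a Unix epoch timestamp (with milliseconds) plus a per-boot serial number. Decoding the epoch reproduces the human-readable prefix on the same line:

```python
from datetime import datetime, timezone

# Epoch from an audit record earlier in this boot: audit(1707765459.897:9).
ts = datetime.fromtimestamp(1707765459.897, tz=timezone.utc)
print(ts.strftime("%b %d %H:%M:%S"))  # Feb 12 19:17:39, matching the log prefix
```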
Feb 12 19:17:44.797186 kernel: audit: type=1131 audit(1707765464.790:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.797208 kernel: audit: type=1131 audit(1707765464.794:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.791168 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:17:44.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.793872 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:17:44.793975 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:17:44.795089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:17:44.795183 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:17:44.797890 systemd[1]: Stopped target paths.target. Feb 12 19:17:44.798795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:17:44.802774 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:17:44.803632 systemd[1]: Stopped target slices.target. Feb 12 19:17:44.804701 systemd[1]: Stopped target sockets.target. Feb 12 19:17:44.805777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:17:44.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:17:44.805890 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:17:44.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.807004 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:17:44.807092 systemd[1]: Stopped ignition-files.service. Feb 12 19:17:44.812270 iscsid[743]: iscsid shutting down. Feb 12 19:17:44.809190 systemd[1]: Stopping ignition-mount.service... Feb 12 19:17:44.810229 systemd[1]: Stopping iscsid.service... Feb 12 19:17:44.815401 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:17:44.816533 ignition[893]: INFO : Ignition 2.14.0 Feb 12 19:17:44.816533 ignition[893]: INFO : Stage: umount Feb 12 19:17:44.816533 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:17:44.816533 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:17:44.819654 ignition[893]: INFO : umount: umount passed Feb 12 19:17:44.819654 ignition[893]: INFO : Ignition finished successfully Feb 12 19:17:44.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.817846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:17:44.818009 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:17:44.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:44.819077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:17:44.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.819163 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:17:44.821665 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:17:44.821765 systemd[1]: Stopped iscsid.service. Feb 12 19:17:44.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.823350 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:17:44.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.823426 systemd[1]: Stopped ignition-mount.service. Feb 12 19:17:44.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.825212 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:17:44.825727 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:17:44.825817 systemd[1]: Closed iscsid.socket. Feb 12 19:17:44.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.826864 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 12 19:17:44.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.826906 systemd[1]: Stopped ignition-disks.service. Feb 12 19:17:44.828108 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:17:44.828152 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:17:44.829729 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:17:44.829778 systemd[1]: Stopped ignition-setup.service. Feb 12 19:17:44.832986 systemd[1]: Stopping iscsiuio.service... Feb 12 19:17:44.835633 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:17:44.835732 systemd[1]: Stopped iscsiuio.service. Feb 12 19:17:44.837706 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:17:44.837806 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:17:44.839955 systemd[1]: Stopped target network.target. Feb 12 19:17:44.840821 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:17:44.840855 systemd[1]: Closed iscsiuio.socket. Feb 12 19:17:44.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.842766 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:17:44.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.844104 systemd[1]: Stopping systemd-resolved.service... 
Feb 12 19:17:44.852156 systemd-networkd[736]: eth0: DHCPv6 lease lost Feb 12 19:17:44.860000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:17:44.853440 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:17:44.861000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:17:44.853561 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:17:44.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.855210 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:17:44.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.855300 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:17:44.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.857275 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:17:44.857303 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:17:44.859630 systemd[1]: Stopping network-cleanup.service... Feb 12 19:17:44.861302 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:17:44.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.861357 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:17:44.863602 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:17:44.863645 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:17:44.866044 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 12 19:17:44.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.866088 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:17:44.867157 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:17:44.869682 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:17:44.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.872855 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:17:44.872954 systemd[1]: Stopped network-cleanup.service. Feb 12 19:17:44.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.877819 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:17:44.877934 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:17:44.879094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:17:44.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.879131 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:17:44.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:17:44.880317 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:17:44.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.880350 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:17:44.881590 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:17:44.881640 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:17:44.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.882870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:17:44.882913 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:17:44.884324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:17:44.884368 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:17:44.886444 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:17:44.887652 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:17:44.887712 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:17:44.889611 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:17:44.889652 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:17:44.890594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:17:44.890641 systemd[1]: Stopped systemd-vconsole-setup.service. 
Feb 12 19:17:44.892698 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:17:44.893174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:17:44.893263 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:17:44.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.903888 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:17:44.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:44.903986 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:17:44.905308 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:17:44.906533 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:17:44.906588 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:17:44.908170 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:17:44.914946 systemd[1]: Switching root. Feb 12 19:17:44.932020 systemd-journald[289]: Journal stopped Feb 12 19:17:47.013318 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 12 19:17:47.013375 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:17:47.013387 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 19:17:47.013397 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:17:47.013407 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:17:47.013419 kernel: SELinux: policy capability open_perms=1 Feb 12 19:17:47.013429 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:17:47.013438 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:17:47.013448 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:17:47.013468 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:17:47.013478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:17:47.013488 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:17:47.013498 systemd[1]: Successfully loaded SELinux policy in 34.320ms. Feb 12 19:17:47.013518 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.822ms. Feb 12 19:17:47.013532 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:17:47.013543 systemd[1]: Detected virtualization kvm. Feb 12 19:17:47.013554 systemd[1]: Detected architecture arm64. Feb 12 19:17:47.013565 systemd[1]: Detected first boot. Feb 12 19:17:47.013575 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:17:47.013585 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:17:47.013595 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:17:47.013605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 19:17:47.013618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:17:47.013630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:17:47.013641 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:17:47.013656 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:17:47.013666 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:17:47.013677 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:17:47.013692 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:17:47.013702 systemd[1]: Created slice system-getty.slice. Feb 12 19:17:47.013713 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:17:47.013724 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:17:47.013734 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:17:47.013759 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:17:47.013771 systemd[1]: Created slice user.slice. Feb 12 19:17:47.013781 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:17:47.013792 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:17:47.013804 systemd[1]: Set up automount boot.automount. Feb 12 19:17:47.013815 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:17:47.013825 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:17:47.013835 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:17:47.013845 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:17:47.013856 systemd[1]: Reached target integritysetup.target. Feb 12 19:17:47.013866 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 12 19:17:47.013876 systemd[1]: Reached target remote-fs.target. Feb 12 19:17:47.013888 systemd[1]: Reached target slices.target. Feb 12 19:17:47.013898 systemd[1]: Reached target swap.target. Feb 12 19:17:47.013908 systemd[1]: Reached target torcx.target. Feb 12 19:17:47.013918 systemd[1]: Reached target veritysetup.target. Feb 12 19:17:47.013928 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:17:47.013939 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:17:47.013950 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:17:47.013960 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:17:47.013971 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:17:47.013981 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:17:47.013992 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:17:47.014002 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:17:47.014012 systemd[1]: Mounting media.mount... Feb 12 19:17:47.014023 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:17:47.014033 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:17:47.014043 systemd[1]: Mounting tmp.mount... Feb 12 19:17:47.014053 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:17:47.014063 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.014074 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:17:47.014086 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:17:47.014096 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:17:47.014106 systemd[1]: Starting modprobe@drm.service... Feb 12 19:17:47.014116 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:17:47.014126 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:17:47.014137 systemd[1]: Starting modprobe@loop.service... 
Feb 12 19:17:47.014148 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:17:47.014158 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:17:47.014169 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:17:47.014180 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:17:47.014190 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:17:47.014200 systemd[1]: Stopped systemd-journald.service. Feb 12 19:17:47.014210 systemd[1]: Starting systemd-journald.service... Feb 12 19:17:47.014220 kernel: fuse: init (API version 7.34) Feb 12 19:17:47.014230 kernel: loop: module loaded Feb 12 19:17:47.014242 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:17:47.014254 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:17:47.014264 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:17:47.014274 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:17:47.014284 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:17:47.014297 systemd[1]: Stopped verity-setup.service. Feb 12 19:17:47.014308 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:17:47.014318 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:17:47.014328 systemd[1]: Mounted media.mount. Feb 12 19:17:47.014338 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:17:47.014348 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:17:47.014358 systemd[1]: Mounted tmp.mount. Feb 12 19:17:47.014370 systemd-journald[992]: Journal started Feb 12 19:17:47.014411 systemd-journald[992]: Runtime Journal (/run/log/journal/e8e18c8719fe460197d53cfc2636ecc5) is 6.0M, max 48.7M, 42.6M free. 
Feb 12 19:17:45.006000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:17:45.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:17:45.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:17:45.162000 audit: BPF prog-id=10 op=LOAD Feb 12 19:17:45.162000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:17:45.162000 audit: BPF prog-id=11 op=LOAD Feb 12 19:17:45.162000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:17:45.199000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:17:45.199000 audit[926]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8ac a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:17:45.199000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:17:45.200000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:17:45.200000 audit[926]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd985 a2=1ed a3=0 items=2 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:17:45.200000 audit: CWD cwd="/" Feb 12 19:17:45.200000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:17:45.200000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:17:45.200000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:17:46.892000 audit: BPF prog-id=12 op=LOAD Feb 12 19:17:46.892000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:17:46.892000 audit: BPF prog-id=13 op=LOAD Feb 12 19:17:46.892000 audit: BPF prog-id=14 op=LOAD Feb 12 19:17:46.892000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:17:46.892000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:17:46.893000 audit: BPF prog-id=15 op=LOAD Feb 12 19:17:46.893000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:17:46.893000 audit: BPF prog-id=16 op=LOAD Feb 12 19:17:46.893000 audit: BPF prog-id=17 op=LOAD Feb 12 19:17:46.893000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:17:46.893000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:17:46.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:46.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.906000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:17:46.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:46.987000 audit: BPF prog-id=18 op=LOAD Feb 12 19:17:46.987000 audit: BPF prog-id=19 op=LOAD Feb 12 19:17:46.987000 audit: BPF prog-id=20 op=LOAD Feb 12 19:17:46.987000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:17:46.987000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:17:47.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:47.012000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:17:47.012000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff955b6d0 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:17:47.012000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:17:46.892027 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:17:45.198421 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:17:46.892039 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:17:45.198692 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:17:46.895035 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:17:45.198710 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:17:45.198739 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:17:47.015912 systemd[1]: Started systemd-journald.service. 
Feb 12 19:17:45.198768 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:17:45.198798 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:17:47.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:45.198810 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:17:45.198997 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:17:45.199031 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:17:45.199043 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:17:45.199518 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:17:45.199549 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:17:45.199566 
/usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:17:45.199580 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:17:45.199596 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:17:47.016714 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:17:45.199610 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:17:46.646975 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:17:46.647318 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:17:46.647410 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:17:46.647582 
/usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:17:46.647633 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:17:46.647689 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-12T19:17:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:17:47.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.018035 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:17:47.018185 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:17:47.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.019246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:17:47.019642 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 12 19:17:47.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.020866 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:17:47.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.021013 systemd[1]: Finished modprobe@drm.service. Feb 12 19:17:47.022016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:17:47.022153 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:17:47.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.023297 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:17:47.023462 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 19:17:47.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.024630 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:17:47.024793 systemd[1]: Finished modprobe@loop.service. Feb 12 19:17:47.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.025924 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:17:47.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.027056 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:17:47.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.028328 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 12 19:17:47.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.029519 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:17:47.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.030980 systemd[1]: Reached target network-pre.target. Feb 12 19:17:47.033059 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:17:47.034885 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:17:47.035600 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:17:47.037194 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:17:47.038933 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:17:47.039645 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:17:47.040720 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:17:47.041549 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.042737 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:17:47.044637 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:17:47.053377 systemd-journald[992]: Time spent on flushing to /var/log/journal/e8e18c8719fe460197d53cfc2636ecc5 is 15.930ms for 1007 entries. Feb 12 19:17:47.053377 systemd-journald[992]: System Journal (/var/log/journal/e8e18c8719fe460197d53cfc2636ecc5) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:17:47.082661 systemd-journald[992]: Received client request to flush runtime journal. 
Feb 12 19:17:47.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.051171 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:17:47.052192 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:17:47.083938 udevadm[1027]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:17:47.059189 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:17:47.061215 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:17:47.062295 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:17:47.064331 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:17:47.065687 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:17:47.070888 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:17:47.073010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 12 19:17:47.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.083577 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:17:47.091025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:17:47.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.429211 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:17:47.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.430000 audit: BPF prog-id=21 op=LOAD Feb 12 19:17:47.430000 audit: BPF prog-id=22 op=LOAD Feb 12 19:17:47.430000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:17:47.430000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:17:47.431483 systemd[1]: Starting systemd-udevd.service... Feb 12 19:17:47.448608 systemd-udevd[1031]: Using default interface naming scheme 'v252'. Feb 12 19:17:47.461121 systemd[1]: Started systemd-udevd.service. Feb 12 19:17:47.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.464000 audit: BPF prog-id=23 op=LOAD Feb 12 19:17:47.468171 systemd[1]: Starting systemd-networkd.service... Feb 12 19:17:47.482879 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
Feb 12 19:17:47.491000 audit: BPF prog-id=24 op=LOAD Feb 12 19:17:47.493000 audit: BPF prog-id=25 op=LOAD Feb 12 19:17:47.493000 audit: BPF prog-id=26 op=LOAD Feb 12 19:17:47.495281 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:17:47.516399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:17:47.530139 systemd[1]: Started systemd-userdbd.service. Feb 12 19:17:47.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.582110 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:17:47.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.584035 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:17:47.591142 systemd-networkd[1046]: lo: Link UP Feb 12 19:17:47.591154 systemd-networkd[1046]: lo: Gained carrier Feb 12 19:17:47.591926 systemd-networkd[1046]: Enumeration completed Feb 12 19:17:47.592049 systemd[1]: Started systemd-networkd.service. Feb 12 19:17:47.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.593602 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:17:47.594446 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 12 19:17:47.600267 systemd-networkd[1046]: eth0: Link UP Feb 12 19:17:47.600280 systemd-networkd[1046]: eth0: Gained carrier Feb 12 19:17:47.625913 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:17:47.626651 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:17:47.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.627469 systemd[1]: Reached target cryptsetup.target. Feb 12 19:17:47.629193 systemd[1]: Starting lvm2-activation.service... Feb 12 19:17:47.632802 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:17:47.667811 systemd[1]: Finished lvm2-activation.service. Feb 12 19:17:47.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.668555 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:17:47.669192 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:17:47.669224 systemd[1]: Reached target local-fs.target. Feb 12 19:17:47.669804 systemd[1]: Reached target machines.target. Feb 12 19:17:47.671593 systemd[1]: Starting ldconfig.service... Feb 12 19:17:47.672517 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.672571 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:17:47.673627 systemd[1]: Starting systemd-boot-update.service... 
Feb 12 19:17:47.675553 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:17:47.677954 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:17:47.678989 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.679054 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.680029 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:17:47.682042 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) Feb 12 19:17:47.684673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:17:47.686543 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:17:47.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.693899 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:17:47.699326 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:17:47.700519 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:17:47.754725 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:17:47.755330 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:17:47.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:17:47.771259 systemd-fsck[1076]: fsck.fat 4.2 (2021-01-31) Feb 12 19:17:47.771259 systemd-fsck[1076]: /dev/vda1: 236 files, 113719/258078 clusters Feb 12 19:17:47.774139 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:17:47.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.777184 systemd[1]: Mounting boot.mount... Feb 12 19:17:47.794994 systemd[1]: Mounted boot.mount. Feb 12 19:17:47.801804 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:17:47.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.857850 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:17:47.858296 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:17:47.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.859985 systemd[1]: Starting audit-rules.service... Feb 12 19:17:47.861871 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:17:47.864103 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:17:47.864000 audit: BPF prog-id=27 op=LOAD Feb 12 19:17:47.866632 systemd[1]: Starting systemd-resolved.service... Feb 12 19:17:47.867000 audit: BPF prog-id=28 op=LOAD Feb 12 19:17:47.869077 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:17:47.871170 systemd[1]: Starting systemd-update-utmp.service... 
Feb 12 19:17:47.875840 systemd[1]: Finished ldconfig.service. Feb 12 19:17:47.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.880385 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:17:47.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.881468 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:17:47.882000 audit[1085]: SYSTEM_BOOT pid=1085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.886834 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:17:47.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.908958 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:17:47.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.911437 systemd[1]: Starting systemd-update-done.service... Feb 12 19:17:47.918095 systemd[1]: Finished systemd-update-done.service. 
Feb 12 19:17:47.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:17:47.925000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:17:47.925000 audit[1100]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd50400b0 a2=420 a3=0 items=0 ppid=1079 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:17:47.925000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:17:47.928252 augenrules[1100]: No rules Feb 12 19:17:47.927462 systemd[1]: Finished audit-rules.service. Feb 12 19:17:47.928441 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:17:47.929514 systemd[1]: Reached target time-set.target. Feb 12 19:17:47.930613 systemd-timesyncd[1084]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:17:47.930669 systemd-timesyncd[1084]: Initial clock synchronization to Mon 2024-02-12 19:17:48.089992 UTC. Feb 12 19:17:47.940334 systemd-resolved[1082]: Positive Trust Anchors: Feb 12 19:17:47.940349 systemd-resolved[1082]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:17:47.940376 systemd-resolved[1082]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:17:47.951069 systemd-resolved[1082]: Defaulting to hostname 'linux'. Feb 12 19:17:47.954326 systemd[1]: Started systemd-resolved.service. Feb 12 19:17:47.955276 systemd[1]: Reached target network.target. Feb 12 19:17:47.956073 systemd[1]: Reached target nss-lookup.target. Feb 12 19:17:47.956883 systemd[1]: Reached target sysinit.target. Feb 12 19:17:47.957712 systemd[1]: Started motdgen.path. Feb 12 19:17:47.958426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:17:47.959699 systemd[1]: Started logrotate.timer. Feb 12 19:17:47.960557 systemd[1]: Started mdadm.timer. Feb 12 19:17:47.961231 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:17:47.962044 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:17:47.962076 systemd[1]: Reached target paths.target. Feb 12 19:17:47.962791 systemd[1]: Reached target timers.target. Feb 12 19:17:47.963897 systemd[1]: Listening on dbus.socket. Feb 12 19:17:47.965683 systemd[1]: Starting docker.socket... Feb 12 19:17:47.968828 systemd[1]: Listening on sshd.socket. Feb 12 19:17:47.969629 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 19:17:47.970059 systemd[1]: Listening on docker.socket. Feb 12 19:17:47.970884 systemd[1]: Reached target sockets.target. Feb 12 19:17:47.971646 systemd[1]: Reached target basic.target. Feb 12 19:17:47.972436 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.972477 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:17:47.973460 systemd[1]: Starting containerd.service... Feb 12 19:17:47.975115 systemd[1]: Starting dbus.service... Feb 12 19:17:47.976706 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:17:47.978574 systemd[1]: Starting extend-filesystems.service... Feb 12 19:17:47.980315 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:17:47.981530 systemd[1]: Starting motdgen.service... Feb 12 19:17:47.984070 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:17:47.984569 jq[1110]: false Feb 12 19:17:47.986007 systemd[1]: Starting prepare-critools.service... Feb 12 19:17:47.987858 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:17:47.989679 systemd[1]: Starting sshd-keygen.service... Feb 12 19:17:47.992648 systemd[1]: Starting systemd-logind.service... Feb 12 19:17:47.993485 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:17:47.993556 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:17:47.993984 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:17:47.994693 systemd[1]: Starting update-engine.service... 
Feb 12 19:17:47.996403 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:17:47.999131 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:17:47.999305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:17:48.000602 jq[1124]: true Feb 12 19:17:48.001184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:17:48.001349 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:17:48.011968 dbus-daemon[1109]: [system] SELinux support is enabled Feb 12 19:17:48.012150 systemd[1]: Started dbus.service. Feb 12 19:17:48.014864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:17:48.014893 systemd[1]: Reached target system-config.target. Feb 12 19:17:48.015922 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:17:48.015943 systemd[1]: Reached target user-config.target. Feb 12 19:17:48.023225 jq[1132]: true Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda1 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda2 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda3 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found usr Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda4 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda6 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda7 Feb 12 19:17:48.032767 extend-filesystems[1111]: Found vda9 Feb 12 19:17:48.032767 extend-filesystems[1111]: Checking size of /dev/vda9 Feb 12 19:17:48.030501 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:17:48.040683 tar[1129]: crictl Feb 12 19:17:48.030721 systemd[1]: Finished motdgen.service. 
Feb 12 19:17:48.054409 extend-filesystems[1111]: Resized partition /dev/vda9 Feb 12 19:17:48.055450 extend-filesystems[1151]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:17:48.076100 tar[1128]: ./ Feb 12 19:17:48.076100 tar[1128]: ./loopback Feb 12 19:17:48.084845 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:17:48.104494 update_engine[1123]: I0212 19:17:48.104264 1123 main.cc:92] Flatcar Update Engine starting Feb 12 19:17:48.115652 update_engine[1123]: I0212 19:17:48.112390 1123 update_check_scheduler.cc:74] Next update check in 3m31s Feb 12 19:17:48.112471 systemd[1]: Started update-engine.service. Feb 12 19:17:48.115259 systemd-logind[1121]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 19:17:48.115731 systemd[1]: Started locksmithd.service. Feb 12 19:17:48.116011 systemd-logind[1121]: New seat seat0. Feb 12 19:17:48.120786 systemd[1]: Started systemd-logind.service. Feb 12 19:17:48.126787 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:17:48.139481 extend-filesystems[1151]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:17:48.139481 extend-filesystems[1151]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:17:48.139481 extend-filesystems[1151]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:17:48.143123 extend-filesystems[1111]: Resized filesystem in /dev/vda9 Feb 12 19:17:48.143885 bash[1162]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:17:48.145301 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:17:48.145465 systemd[1]: Finished extend-filesystems.service. Feb 12 19:17:48.146920 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 12 19:17:48.158176 env[1135]: time="2024-02-12T19:17:48.158112690Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:17:48.161096 tar[1128]: ./bandwidth Feb 12 19:17:48.193316 env[1135]: time="2024-02-12T19:17:48.193199539Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:17:48.193423 env[1135]: time="2024-02-12T19:17:48.193385289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:17:48.196846 tar[1128]: ./ptp Feb 12 19:17:48.201921 env[1135]: time="2024-02-12T19:17:48.201870268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:17:48.201921 env[1135]: time="2024-02-12T19:17:48.201911963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202180 env[1135]: time="2024-02-12T19:17:48.202152871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202180 env[1135]: time="2024-02-12T19:17:48.202174494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202237 env[1135]: time="2024-02-12T19:17:48.202188283Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:17:48.202237 env[1135]: time="2024-02-12T19:17:48.202198645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202298 env[1135]: time="2024-02-12T19:17:48.202279343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202533 env[1135]: time="2024-02-12T19:17:48.202507603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202650 env[1135]: time="2024-02-12T19:17:48.202627343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:17:48.202650 env[1135]: time="2024-02-12T19:17:48.202647375Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:17:48.202720 env[1135]: time="2024-02-12T19:17:48.202702370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:17:48.202770 env[1135]: time="2024-02-12T19:17:48.202719790Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:17:48.207043 env[1135]: time="2024-02-12T19:17:48.207002648Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:17:48.207043 env[1135]: time="2024-02-12T19:17:48.207044996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 12 19:17:48.207152 env[1135]: time="2024-02-12T19:17:48.207058826Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:17:48.207152 env[1135]: time="2024-02-12T19:17:48.207090362Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207152 env[1135]: time="2024-02-12T19:17:48.207106885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207152 env[1135]: time="2024-02-12T19:17:48.207123041Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207152 env[1135]: time="2024-02-12T19:17:48.207144133Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207548 env[1135]: time="2024-02-12T19:17:48.207523793Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207548 env[1135]: time="2024-02-12T19:17:48.207548353Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207607 env[1135]: time="2024-02-12T19:17:48.207562265Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207607 env[1135]: time="2024-02-12T19:17:48.207575442Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.207607 env[1135]: time="2024-02-12T19:17:48.207589150Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:17:48.207749 env[1135]: time="2024-02-12T19:17:48.207725331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 12 19:17:48.207856 env[1135]: time="2024-02-12T19:17:48.207836708Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:17:48.208087 env[1135]: time="2024-02-12T19:17:48.208066764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:17:48.208121 env[1135]: time="2024-02-12T19:17:48.208096750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208121 env[1135]: time="2024-02-12T19:17:48.208111396Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:17:48.208240 env[1135]: time="2024-02-12T19:17:48.208223752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208271 env[1135]: time="2024-02-12T19:17:48.208240438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208271 env[1135]: time="2024-02-12T19:17:48.208253697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208271 env[1135]: time="2024-02-12T19:17:48.208266344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208326 env[1135]: time="2024-02-12T19:17:48.208278991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208326 env[1135]: time="2024-02-12T19:17:48.208291353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208326 env[1135]: time="2024-02-12T19:17:48.208303062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 12 19:17:48.208326 env[1135]: time="2024-02-12T19:17:48.208314607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208408 env[1135]: time="2024-02-12T19:17:48.208328030Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:17:48.208480 env[1135]: time="2024-02-12T19:17:48.208457031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208529 env[1135]: time="2024-02-12T19:17:48.208480897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208529 env[1135]: time="2024-02-12T19:17:48.208494401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:17:48.208529 env[1135]: time="2024-02-12T19:17:48.208507619Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:17:48.208598 env[1135]: time="2024-02-12T19:17:48.208531200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:17:48.208598 env[1135]: time="2024-02-12T19:17:48.208543195Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:17:48.208598 env[1135]: time="2024-02-12T19:17:48.208561757Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:17:48.208659 env[1135]: time="2024-02-12T19:17:48.208596884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:17:48.208863 env[1135]: time="2024-02-12T19:17:48.208808295Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.208867982Z" level=info msg="Connect containerd service" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.208904455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.209638031Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210066688Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210111483Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210113972Z" level=info msg="Start subscribing containerd event" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210165621Z" level=info msg="containerd successfully booted in 0.076791s" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210176065Z" level=info msg="Start recovering state" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210246726Z" level=info msg="Start event monitor" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210266146Z" level=info msg="Start snapshots syncer" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210278262Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:17:48.211435 env[1135]: time="2024-02-12T19:17:48.210286830Z" level=info msg="Start streaming server" Feb 12 19:17:48.210253 systemd[1]: Started containerd.service. 
Feb 12 19:17:48.237618 tar[1128]: ./vlan Feb 12 19:17:48.267351 tar[1128]: ./host-device Feb 12 19:17:48.284359 locksmithd[1164]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:17:48.298182 tar[1128]: ./tuning Feb 12 19:17:48.324178 tar[1128]: ./vrf Feb 12 19:17:48.351005 tar[1128]: ./sbr Feb 12 19:17:48.377219 tar[1128]: ./tap Feb 12 19:17:48.407414 tar[1128]: ./dhcp Feb 12 19:17:48.480483 tar[1128]: ./static Feb 12 19:17:48.502105 tar[1128]: ./firewall Feb 12 19:17:48.535245 tar[1128]: ./macvlan Feb 12 19:17:48.535398 systemd[1]: Finished prepare-critools.service. Feb 12 19:17:48.565317 tar[1128]: ./dummy Feb 12 19:17:48.594644 tar[1128]: ./bridge Feb 12 19:17:48.626537 tar[1128]: ./ipvlan Feb 12 19:17:48.655726 tar[1128]: ./portmap Feb 12 19:17:48.683533 tar[1128]: ./host-local Feb 12 19:17:48.718504 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:17:49.052896 systemd-networkd[1046]: eth0: Gained IPv6LL Feb 12 19:17:49.180193 sshd_keygen[1130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:17:49.197891 systemd[1]: Finished sshd-keygen.service. Feb 12 19:17:49.200220 systemd[1]: Starting issuegen.service... Feb 12 19:17:49.204707 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:17:49.204969 systemd[1]: Finished issuegen.service. Feb 12 19:17:49.207108 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:17:49.213300 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:17:49.215609 systemd[1]: Started getty@tty1.service. Feb 12 19:17:49.217818 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:17:49.218892 systemd[1]: Reached target getty.target. Feb 12 19:17:49.219729 systemd[1]: Reached target multi-user.target. Feb 12 19:17:49.221847 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:17:49.228647 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 12 19:17:49.228821 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:17:49.229874 systemd[1]: Startup finished in 603ms (kernel) + 5.380s (initrd) + 4.271s (userspace) = 10.256s. Feb 12 19:17:51.851063 systemd[1]: Created slice system-sshd.slice. Feb 12 19:17:51.852211 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:49640.service. Feb 12 19:17:51.894066 sshd[1193]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:51.896592 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:51.907660 systemd-logind[1121]: New session 1 of user core. Feb 12 19:17:51.908611 systemd[1]: Created slice user-500.slice. Feb 12 19:17:51.909925 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:17:51.918882 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:17:51.920308 systemd[1]: Starting user@500.service... Feb 12 19:17:51.923082 (systemd)[1196]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:51.985251 systemd[1196]: Queued start job for default target default.target. Feb 12 19:17:51.985775 systemd[1196]: Reached target paths.target. Feb 12 19:17:51.985798 systemd[1196]: Reached target sockets.target. Feb 12 19:17:51.985810 systemd[1196]: Reached target timers.target. Feb 12 19:17:51.985821 systemd[1196]: Reached target basic.target. Feb 12 19:17:51.985876 systemd[1196]: Reached target default.target. Feb 12 19:17:51.985908 systemd[1196]: Startup finished in 56ms. Feb 12 19:17:51.986098 systemd[1]: Started user@500.service. Feb 12 19:17:51.987105 systemd[1]: Started session-1.scope. Feb 12 19:17:52.039448 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:49656.service. 
Feb 12 19:17:52.084493 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:52.086000 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:52.089348 systemd-logind[1121]: New session 2 of user core. Feb 12 19:17:52.090234 systemd[1]: Started session-2.scope. Feb 12 19:17:52.152637 sshd[1205]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:52.155984 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:49668.service. Feb 12 19:17:52.156487 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:49656.service: Deactivated successfully. Feb 12 19:17:52.157318 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:17:52.157821 systemd-logind[1121]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:17:52.158483 systemd-logind[1121]: Removed session 2. Feb 12 19:17:52.191687 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 49668 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:52.192898 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:52.196255 systemd-logind[1121]: New session 3 of user core. Feb 12 19:17:52.197105 systemd[1]: Started session-3.scope. Feb 12 19:17:52.246786 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:52.249356 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:49668.service: Deactivated successfully. Feb 12 19:17:52.250019 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:17:52.250498 systemd-logind[1121]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:17:52.251469 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:49670.service. Feb 12 19:17:52.252167 systemd-logind[1121]: Removed session 3. 
Feb 12 19:17:52.287605 sshd[1217]: Accepted publickey for core from 10.0.0.1 port 49670 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:52.289102 sshd[1217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:52.292336 systemd-logind[1121]: New session 4 of user core. Feb 12 19:17:52.293310 systemd[1]: Started session-4.scope. Feb 12 19:17:52.347040 sshd[1217]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:52.349686 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:49670.service: Deactivated successfully. Feb 12 19:17:52.350342 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:17:52.350833 systemd-logind[1121]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:17:52.351907 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:49686.service. Feb 12 19:17:52.352559 systemd-logind[1121]: Removed session 4. Feb 12 19:17:52.387439 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 49686 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:52.388624 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:52.392043 systemd-logind[1121]: New session 5 of user core. Feb 12 19:17:52.392842 systemd[1]: Started session-5.scope. Feb 12 19:17:52.467823 sudo[1226]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:17:52.468633 sudo[1226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:17:52.984504 systemd[1]: Reloading. 
Feb 12 19:17:53.034718 /usr/lib/systemd/system-generators/torcx-generator[1256]: time="2024-02-12T19:17:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:17:53.034749 /usr/lib/systemd/system-generators/torcx-generator[1256]: time="2024-02-12T19:17:53Z" level=info msg="torcx already run" Feb 12 19:17:53.088185 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:17:53.088204 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:17:53.105471 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:17:53.162593 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:17:53.167978 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:17:53.168403 systemd[1]: Reached target network-online.target. Feb 12 19:17:53.169901 systemd[1]: Started kubelet.service. Feb 12 19:17:53.179772 systemd[1]: Starting coreos-metadata.service... Feb 12 19:17:53.186646 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 19:17:53.186987 systemd[1]: Finished coreos-metadata.service. 
Feb 12 19:17:53.300597 kubelet[1294]: E0212 19:17:53.300465 1294 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:17:53.303556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:17:53.303678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:17:53.474449 systemd[1]: Stopped kubelet.service. Feb 12 19:17:53.490134 systemd[1]: Reloading. Feb 12 19:17:53.538253 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-12T19:17:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:17:53.538283 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-12T19:17:53Z" level=info msg="torcx already run" Feb 12 19:17:53.653422 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:17:53.653669 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:17:53.671393 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:17:53.733115 systemd[1]: Started kubelet.service. 
Feb 12 19:17:53.771829 kubelet[1402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:17:53.771829 kubelet[1402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:17:53.771829 kubelet[1402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:17:53.772175 kubelet[1402]: I0212 19:17:53.771867 1402 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:17:54.584816 kubelet[1402]: I0212 19:17:54.584776 1402 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:17:54.584816 kubelet[1402]: I0212 19:17:54.584804 1402 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:17:54.585048 kubelet[1402]: I0212 19:17:54.585019 1402 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:17:54.588766 kubelet[1402]: I0212 19:17:54.588406 1402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:17:54.594698 kubelet[1402]: W0212 19:17:54.594670 1402 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:17:54.595421 kubelet[1402]: I0212 19:17:54.595397 1402 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:17:54.595612 kubelet[1402]: I0212 19:17:54.595595 1402 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:17:54.595763 kubelet[1402]: I0212 19:17:54.595742 1402 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:17:54.595836 kubelet[1402]: I0212 19:17:54.595786 1402 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:17:54.595836 kubelet[1402]: I0212 19:17:54.595795 1402 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:17:54.595904 kubelet[1402]: I0212 
19:17:54.595890 1402 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:54.596126 kubelet[1402]: I0212 19:17:54.596112 1402 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:17:54.596159 kubelet[1402]: I0212 19:17:54.596129 1402 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:17:54.596159 kubelet[1402]: I0212 19:17:54.596148 1402 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:17:54.596198 kubelet[1402]: I0212 19:17:54.596162 1402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:17:54.596605 kubelet[1402]: E0212 19:17:54.596579 1402 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:54.596728 kubelet[1402]: E0212 19:17:54.596714 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:54.597161 kubelet[1402]: I0212 19:17:54.597127 1402 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:17:54.597722 kubelet[1402]: W0212 19:17:54.597690 1402 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 19:17:54.598641 kubelet[1402]: I0212 19:17:54.598615 1402 server.go:1232] "Started kubelet" Feb 12 19:17:54.598718 kubelet[1402]: I0212 19:17:54.598695 1402 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:17:54.599138 kubelet[1402]: I0212 19:17:54.599118 1402 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:17:54.599328 kubelet[1402]: E0212 19:17:54.599297 1402 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:17:54.599383 kubelet[1402]: E0212 19:17:54.599357 1402 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:17:54.599566 kubelet[1402]: I0212 19:17:54.599546 1402 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:17:54.599884 kubelet[1402]: I0212 19:17:54.599856 1402 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:17:54.601833 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:17:54.601966 kubelet[1402]: I0212 19:17:54.601937 1402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:17:54.602079 kubelet[1402]: I0212 19:17:54.602059 1402 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:17:54.602182 kubelet[1402]: E0212 19:17:54.602166 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.62\" not found" Feb 12 19:17:54.602340 kubelet[1402]: I0212 19:17:54.602324 1402 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:17:54.602504 kubelet[1402]: I0212 19:17:54.602489 1402 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:17:54.620699 kubelet[1402]: W0212 19:17:54.620663 1402 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:17:54.620699 kubelet[1402]: E0212 19:17:54.620701 1402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:17:54.620858 kubelet[1402]: W0212 19:17:54.620821 1402 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.62" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:17:54.620858 kubelet[1402]: E0212 19:17:54.620834 1402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.62" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:17:54.620970 kubelet[1402]: E0212 19:17:54.620874 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6a9201770", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 598590320, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 598590320, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.621374 kubelet[1402]: E0212 19:17:54.621346 1402 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.62\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 12 19:17:54.621422 kubelet[1402]: W0212 19:17:54.621407 1402 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:17:54.621422 kubelet[1402]: E0212 19:17:54.621421 1402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:17:54.622226 kubelet[1402]: E0212 19:17:54.622154 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6a92b2651", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 599315025, time.Local), 
LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 599315025, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:17:54.623829 kubelet[1402]: I0212 19:17:54.623794 1402 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:17:54.623829 kubelet[1402]: I0212 19:17:54.623827 1402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:17:54.623949 kubelet[1402]: I0212 19:17:54.623849 1402 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:54.624400 kubelet[1402]: E0212 19:17:54.624336 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa978537", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.62 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623194423, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623194423, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:17:54.625444 kubelet[1402]: E0212 19:17:54.625379 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97b45a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.62 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623206490, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623206490, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.625704 kubelet[1402]: I0212 19:17:54.625670 1402 policy_none.go:49] "None policy: Start" Feb 12 19:17:54.626340 kubelet[1402]: I0212 19:17:54.626324 1402 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:17:54.626425 kubelet[1402]: I0212 19:17:54.626414 1402 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:17:54.626887 kubelet[1402]: E0212 19:17:54.626686 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97bf13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.62 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623209235, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623209235, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:17:54.632553 systemd[1]: Created slice kubepods.slice. Feb 12 19:17:54.636815 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 12 19:17:54.639309 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:17:54.645598 kubelet[1402]: I0212 19:17:54.645580 1402 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:17:54.646034 kubelet[1402]: I0212 19:17:54.646016 1402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:17:54.647167 kubelet[1402]: E0212 19:17:54.647150 1402 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.62\" not found" Feb 12 19:17:54.648056 kubelet[1402]: E0212 19:17:54.647977 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6ac006278", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 646844024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 646844024, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" 
in the namespace "default"' (will not retry!) Feb 12 19:17:54.681341 kubelet[1402]: I0212 19:17:54.681309 1402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:17:54.682178 kubelet[1402]: I0212 19:17:54.682154 1402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 12 19:17:54.682178 kubelet[1402]: I0212 19:17:54.682183 1402 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:17:54.682276 kubelet[1402]: I0212 19:17:54.682203 1402 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:17:54.682276 kubelet[1402]: E0212 19:17:54.682249 1402 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:17:54.683651 kubelet[1402]: W0212 19:17:54.683623 1402 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:17:54.683755 kubelet[1402]: E0212 19:17:54.683656 1402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:17:54.703853 kubelet[1402]: I0212 19:17:54.703827 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.62" Feb 12 19:17:54.705601 kubelet[1402]: E0212 19:17:54.705522 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa978537", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.62 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623194423, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 703782085, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa978537" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.705889 kubelet[1402]: E0212 19:17:54.705871 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.62" Feb 12 19:17:54.706507 kubelet[1402]: E0212 19:17:54.706439 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97b45a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.62 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623206490, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 703793466, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97b45a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.707419 kubelet[1402]: E0212 19:17:54.707363 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97bf13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.62 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623209235, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 703801336, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97bf13" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.826801 kubelet[1402]: E0212 19:17:54.826764 1402 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.62\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 12 19:17:54.906912 kubelet[1402]: I0212 19:17:54.906826 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.62" Feb 12 19:17:54.908660 kubelet[1402]: E0212 19:17:54.908577 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa978537", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.62 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623194423, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 906777512, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa978537" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.909104 kubelet[1402]: E0212 19:17:54.909059 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.62" Feb 12 19:17:54.909782 kubelet[1402]: E0212 19:17:54.909704 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97b45a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.62 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623206490, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 906792848, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97b45a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:54.910583 kubelet[1402]: E0212 19:17:54.910517 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97bf13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.62 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623209235, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 906796359, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97bf13" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:55.228937 kubelet[1402]: E0212 19:17:55.228857 1402 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.62\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 12 19:17:55.310094 kubelet[1402]: I0212 19:17:55.310068 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.62" Feb 12 19:17:55.316318 kubelet[1402]: E0212 19:17:55.316290 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.62" Feb 12 19:17:55.316431 kubelet[1402]: E0212 19:17:55.316288 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa978537", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.62 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623194423, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 55, 310022044, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa978537" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:17:55.317382 kubelet[1402]: E0212 19:17:55.317315 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97b45a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.62 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623206490, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 55, 310034501, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97b45a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:55.318406 kubelet[1402]: E0212 19:17:55.318346 1402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.62.17b333a6aa97bf13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.62", UID:"10.0.0.62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.62 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.62"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 54, 623209235, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 55, 310037564, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.62"}': 'events "10.0.0.62.17b333a6aa97bf13" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:17:55.471472 kubelet[1402]: W0212 19:17:55.471438 1402 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:17:55.471568 kubelet[1402]: E0212 19:17:55.471475 1402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:17:55.587165 kubelet[1402]: I0212 19:17:55.587079 1402 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:17:55.597586 kubelet[1402]: E0212 19:17:55.597533 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:55.979161 kubelet[1402]: E0212 19:17:55.979062 1402 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.62" not found Feb 12 19:17:56.032556 kubelet[1402]: E0212 19:17:56.032528 1402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.62\" not found" node="10.0.0.62" Feb 12 19:17:56.117903 kubelet[1402]: I0212 19:17:56.117878 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.62" Feb 12 19:17:56.121957 kubelet[1402]: I0212 19:17:56.121921 1402 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.62" Feb 12 19:17:56.234998 kubelet[1402]: I0212 19:17:56.234899 1402 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:17:56.235508 env[1135]: time="2024-02-12T19:17:56.235399266Z" level=info msg="No cni config 
template is specified, wait for other system components to drop the config." Feb 12 19:17:56.235786 kubelet[1402]: I0212 19:17:56.235581 1402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:17:56.247128 sudo[1226]: pam_unix(sudo:session): session closed for user root Feb 12 19:17:56.248942 sshd[1223]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:56.251378 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:49686.service: Deactivated successfully. Feb 12 19:17:56.252239 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:17:56.252822 systemd-logind[1121]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:17:56.253592 systemd-logind[1121]: Removed session 5. Feb 12 19:17:56.598004 kubelet[1402]: I0212 19:17:56.597882 1402 apiserver.go:52] "Watching apiserver" Feb 12 19:17:56.598143 kubelet[1402]: E0212 19:17:56.597906 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:56.600594 kubelet[1402]: I0212 19:17:56.600562 1402 topology_manager.go:215] "Topology Admit Handler" podUID="44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9" podNamespace="kube-system" podName="kube-proxy-xgvv9" Feb 12 19:17:56.600675 kubelet[1402]: I0212 19:17:56.600661 1402 topology_manager.go:215] "Topology Admit Handler" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" podNamespace="kube-system" podName="cilium-m6rdq" Feb 12 19:17:56.602788 kubelet[1402]: I0212 19:17:56.602768 1402 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:17:56.605596 systemd[1]: Created slice kubepods-burstable-pod3addb8ba_f894_4902_b2c3_db57695002ef.slice. 
Feb 12 19:17:56.614408 kubelet[1402]: I0212 19:17:56.614385 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-xtables-lock\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.614535 kubelet[1402]: I0212 19:17:56.614523 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9-kube-proxy\") pod \"kube-proxy-xgvv9\" (UID: \"44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9\") " pod="kube-system/kube-proxy-xgvv9" Feb 12 19:17:56.614607 kubelet[1402]: I0212 19:17:56.614597 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9-xtables-lock\") pod \"kube-proxy-xgvv9\" (UID: \"44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9\") " pod="kube-system/kube-proxy-xgvv9" Feb 12 19:17:56.614703 kubelet[1402]: I0212 19:17:56.614692 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9-lib-modules\") pod \"kube-proxy-xgvv9\" (UID: \"44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9\") " pod="kube-system/kube-proxy-xgvv9" Feb 12 19:17:56.614850 kubelet[1402]: I0212 19:17:56.614806 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjg5w\" (UniqueName: \"kubernetes.io/projected/44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9-kube-api-access-kjg5w\") pod \"kube-proxy-xgvv9\" (UID: \"44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9\") " pod="kube-system/kube-proxy-xgvv9" Feb 12 19:17:56.614981 kubelet[1402]: I0212 19:17:56.614960 1402 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-bpf-maps\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615020 kubelet[1402]: I0212 19:17:56.615003 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-cgroup\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615045 kubelet[1402]: I0212 19:17:56.615023 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cni-path\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615045 kubelet[1402]: I0212 19:17:56.615044 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3addb8ba-f894-4902-b2c3-db57695002ef-clustermesh-secrets\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615103 kubelet[1402]: I0212 19:17:56.615074 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-config-path\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615103 kubelet[1402]: I0212 19:17:56.615097 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-kernel\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615147 kubelet[1402]: I0212 19:17:56.615122 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrms5\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-kube-api-access-qrms5\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615147 kubelet[1402]: I0212 19:17:56.615141 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-hostproc\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615192 kubelet[1402]: I0212 19:17:56.615158 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-etc-cni-netd\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615192 kubelet[1402]: I0212 19:17:56.615179 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-net\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615236 kubelet[1402]: I0212 19:17:56.615197 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-run\") pod \"cilium-m6rdq\" (UID: 
\"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615236 kubelet[1402]: I0212 19:17:56.615215 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-lib-modules\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.615276 kubelet[1402]: I0212 19:17:56.615239 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-hubble-tls\") pod \"cilium-m6rdq\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " pod="kube-system/cilium-m6rdq" Feb 12 19:17:56.617190 systemd[1]: Created slice kubepods-besteffort-pod44f9a2fe_4d05_4440_81cf_e0cfcc1af3e9.slice. Feb 12 19:17:56.916179 kubelet[1402]: E0212 19:17:56.916075 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:56.917606 env[1135]: time="2024-02-12T19:17:56.917326927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6rdq,Uid:3addb8ba-f894-4902-b2c3-db57695002ef,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:56.931675 kubelet[1402]: E0212 19:17:56.931643 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:56.932257 env[1135]: time="2024-02-12T19:17:56.932202716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgvv9,Uid:44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:57.462503 env[1135]: time="2024-02-12T19:17:57.462444435Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.463587 env[1135]: time="2024-02-12T19:17:57.463543341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.467766 env[1135]: time="2024-02-12T19:17:57.467696160Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.468811 env[1135]: time="2024-02-12T19:17:57.468782109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.470950 env[1135]: time="2024-02-12T19:17:57.470925034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.471931 env[1135]: time="2024-02-12T19:17:57.471901089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.472765 env[1135]: time="2024-02-12T19:17:57.472725480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:57.474850 env[1135]: time="2024-02-12T19:17:57.474816778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
19:17:57.507116 env[1135]: time="2024-02-12T19:17:57.506978527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:57.507252 env[1135]: time="2024-02-12T19:17:57.507171356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:57.507252 env[1135]: time="2024-02-12T19:17:57.507205036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:57.507252 env[1135]: time="2024-02-12T19:17:57.507214734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:57.507557 env[1135]: time="2024-02-12T19:17:57.507401809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63 pid=1465 runtime=io.containerd.runc.v2 Feb 12 19:17:57.507976 env[1135]: time="2024-02-12T19:17:57.507937721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:57.508106 env[1135]: time="2024-02-12T19:17:57.508077795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:57.508394 env[1135]: time="2024-02-12T19:17:57.508351949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36fce7aafacc72215d2814ddd04cdbb5d7dd79d07740d5e63f8cef76f1d46362 pid=1466 runtime=io.containerd.runc.v2 Feb 12 19:17:57.534966 systemd[1]: Started cri-containerd-b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63.scope. 
Feb 12 19:17:57.543257 systemd[1]: Started cri-containerd-36fce7aafacc72215d2814ddd04cdbb5d7dd79d07740d5e63f8cef76f1d46362.scope. Feb 12 19:17:57.587498 env[1135]: time="2024-02-12T19:17:57.587456821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6rdq,Uid:3addb8ba-f894-4902-b2c3-db57695002ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\"" Feb 12 19:17:57.588466 kubelet[1402]: E0212 19:17:57.588425 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:57.590033 env[1135]: time="2024-02-12T19:17:57.589995866Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:17:57.597886 env[1135]: time="2024-02-12T19:17:57.597844300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgvv9,Uid:44f9a2fe-4d05-4440-81cf-e0cfcc1af3e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"36fce7aafacc72215d2814ddd04cdbb5d7dd79d07740d5e63f8cef76f1d46362\"" Feb 12 19:17:57.598385 kubelet[1402]: E0212 19:17:57.598344 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:57.598841 kubelet[1402]: E0212 19:17:57.598659 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:57.721875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160502077.mount: Deactivated successfully. 
Feb 12 19:17:58.598876 kubelet[1402]: E0212 19:17:58.598700 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:17:59.599719 kubelet[1402]: E0212 19:17:59.599615 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:00.600438 kubelet[1402]: E0212 19:18:00.600370 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:01.227965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592473802.mount: Deactivated successfully. Feb 12 19:18:01.601418 kubelet[1402]: E0212 19:18:01.601288 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:02.602009 kubelet[1402]: E0212 19:18:02.601965 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:03.330082 env[1135]: time="2024-02-12T19:18:03.330032228Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:03.331155 env[1135]: time="2024-02-12T19:18:03.331127806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:03.332459 env[1135]: time="2024-02-12T19:18:03.332431262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:03.333605 env[1135]: time="2024-02-12T19:18:03.333573005Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:18:03.335366 env[1135]: time="2024-02-12T19:18:03.334807035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 12 19:18:03.335692 env[1135]: time="2024-02-12T19:18:03.335616687Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:18:03.346972 env[1135]: time="2024-02-12T19:18:03.346917439Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\"" Feb 12 19:18:03.347707 env[1135]: time="2024-02-12T19:18:03.347679242Z" level=info msg="StartContainer for \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\"" Feb 12 19:18:03.364458 systemd[1]: Started cri-containerd-f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced.scope. Feb 12 19:18:03.398330 env[1135]: time="2024-02-12T19:18:03.398279488Z" level=info msg="StartContainer for \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\" returns successfully" Feb 12 19:18:03.438753 systemd[1]: cri-containerd-f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced.scope: Deactivated successfully. 
Feb 12 19:18:03.548556 env[1135]: time="2024-02-12T19:18:03.548497097Z" level=info msg="shim disconnected" id=f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced Feb 12 19:18:03.548556 env[1135]: time="2024-02-12T19:18:03.548541777Z" level=warning msg="cleaning up after shim disconnected" id=f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced namespace=k8s.io Feb 12 19:18:03.548556 env[1135]: time="2024-02-12T19:18:03.548551082Z" level=info msg="cleaning up dead shim" Feb 12 19:18:03.555167 env[1135]: time="2024-02-12T19:18:03.555124915Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1583 runtime=io.containerd.runc.v2\n" Feb 12 19:18:03.604197 kubelet[1402]: E0212 19:18:03.602785 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:03.697782 kubelet[1402]: E0212 19:18:03.697714 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:03.699641 env[1135]: time="2024-02-12T19:18:03.699595590Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:18:03.709432 env[1135]: time="2024-02-12T19:18:03.709377067Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\"" Feb 12 19:18:03.710007 env[1135]: time="2024-02-12T19:18:03.709978119Z" level=info msg="StartContainer for \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\"" Feb 12 19:18:03.723937 systemd[1]: Started 
cri-containerd-ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb.scope. Feb 12 19:18:03.750822 env[1135]: time="2024-02-12T19:18:03.750767609Z" level=info msg="StartContainer for \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\" returns successfully" Feb 12 19:18:03.768395 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:18:03.768946 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:18:03.769114 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:18:03.770534 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:18:03.772359 systemd[1]: cri-containerd-ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb.scope: Deactivated successfully. Feb 12 19:18:03.777199 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:18:03.792317 env[1135]: time="2024-02-12T19:18:03.792278875Z" level=info msg="shim disconnected" id=ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb Feb 12 19:18:03.792498 env[1135]: time="2024-02-12T19:18:03.792478932Z" level=warning msg="cleaning up after shim disconnected" id=ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb namespace=k8s.io Feb 12 19:18:03.792555 env[1135]: time="2024-02-12T19:18:03.792542943Z" level=info msg="cleaning up dead shim" Feb 12 19:18:03.799399 env[1135]: time="2024-02-12T19:18:03.799366085Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1645 runtime=io.containerd.runc.v2\n" Feb 12 19:18:04.343669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced-rootfs.mount: Deactivated successfully. Feb 12 19:18:04.450637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376458396.mount: Deactivated successfully. 
Feb 12 19:18:04.603917 kubelet[1402]: E0212 19:18:04.603585 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:04.700079 kubelet[1402]: E0212 19:18:04.700038 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:04.702184 env[1135]: time="2024-02-12T19:18:04.701962833Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:18:04.720368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298692271.mount: Deactivated successfully. Feb 12 19:18:04.723481 env[1135]: time="2024-02-12T19:18:04.723433723Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\"" Feb 12 19:18:04.723988 env[1135]: time="2024-02-12T19:18:04.723955107Z" level=info msg="StartContainer for \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\"" Feb 12 19:18:04.738617 systemd[1]: Started cri-containerd-3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc.scope. Feb 12 19:18:04.792587 env[1135]: time="2024-02-12T19:18:04.786043879Z" level=info msg="StartContainer for \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\" returns successfully" Feb 12 19:18:04.795913 systemd[1]: cri-containerd-3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc.scope: Deactivated successfully. 
Feb 12 19:18:04.915633 env[1135]: time="2024-02-12T19:18:04.915163427Z" level=info msg="shim disconnected" id=3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc Feb 12 19:18:04.915633 env[1135]: time="2024-02-12T19:18:04.915211620Z" level=warning msg="cleaning up after shim disconnected" id=3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc namespace=k8s.io Feb 12 19:18:04.915633 env[1135]: time="2024-02-12T19:18:04.915221804Z" level=info msg="cleaning up dead shim" Feb 12 19:18:04.919797 env[1135]: time="2024-02-12T19:18:04.919761783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:04.921077 env[1135]: time="2024-02-12T19:18:04.921050208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:04.922639 env[1135]: time="2024-02-12T19:18:04.922611714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:04.922771 env[1135]: time="2024-02-12T19:18:04.922620254Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1704 runtime=io.containerd.runc.v2\n" Feb 12 19:18:04.925977 env[1135]: time="2024-02-12T19:18:04.925936119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:04.926331 env[1135]: time="2024-02-12T19:18:04.926284617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference 
\"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 12 19:18:04.928840 env[1135]: time="2024-02-12T19:18:04.928801767Z" level=info msg="CreateContainer within sandbox \"36fce7aafacc72215d2814ddd04cdbb5d7dd79d07740d5e63f8cef76f1d46362\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:18:04.945443 env[1135]: time="2024-02-12T19:18:04.945370267Z" level=info msg="CreateContainer within sandbox \"36fce7aafacc72215d2814ddd04cdbb5d7dd79d07740d5e63f8cef76f1d46362\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3aed42edad0bae1a53a39726ac9a442d249d2a73dbc13eae23912738ead46c6\"" Feb 12 19:18:04.946191 env[1135]: time="2024-02-12T19:18:04.946164171Z" level=info msg="StartContainer for \"c3aed42edad0bae1a53a39726ac9a442d249d2a73dbc13eae23912738ead46c6\"" Feb 12 19:18:04.960329 systemd[1]: Started cri-containerd-c3aed42edad0bae1a53a39726ac9a442d249d2a73dbc13eae23912738ead46c6.scope. Feb 12 19:18:04.998699 env[1135]: time="2024-02-12T19:18:04.998632195Z" level=info msg="StartContainer for \"c3aed42edad0bae1a53a39726ac9a442d249d2a73dbc13eae23912738ead46c6\" returns successfully" Feb 12 19:18:05.343590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc-rootfs.mount: Deactivated successfully. 
Feb 12 19:18:05.604047 kubelet[1402]: E0212 19:18:05.603907 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:05.704696 kubelet[1402]: E0212 19:18:05.704657 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:05.706822 env[1135]: time="2024-02-12T19:18:05.706773181Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:18:05.707641 kubelet[1402]: E0212 19:18:05.707619 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:05.723799 env[1135]: time="2024-02-12T19:18:05.723672909Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\"" Feb 12 19:18:05.724994 env[1135]: time="2024-02-12T19:18:05.724504778Z" level=info msg="StartContainer for \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\"" Feb 12 19:18:05.742825 systemd[1]: Started cri-containerd-c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c.scope. 
Feb 12 19:18:05.743271 kubelet[1402]: I0212 19:18:05.743204 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xgvv9" podStartSLOduration=2.415393216 podCreationTimestamp="2024-02-12 19:17:56 +0000 UTC" firstStartedPulling="2024-02-12 19:17:57.599216273 +0000 UTC m=+3.862954050" lastFinishedPulling="2024-02-12 19:18:04.926962248 +0000 UTC m=+11.190700025" observedRunningTime="2024-02-12 19:18:05.740863715 +0000 UTC m=+12.004601532" watchObservedRunningTime="2024-02-12 19:18:05.743139191 +0000 UTC m=+12.006876968" Feb 12 19:18:05.776896 systemd[1]: cri-containerd-c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c.scope: Deactivated successfully. Feb 12 19:18:05.777608 env[1135]: time="2024-02-12T19:18:05.777568661Z" level=info msg="StartContainer for \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\" returns successfully" Feb 12 19:18:05.806430 env[1135]: time="2024-02-12T19:18:05.806364835Z" level=info msg="shim disconnected" id=c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c Feb 12 19:18:05.806430 env[1135]: time="2024-02-12T19:18:05.806412212Z" level=warning msg="cleaning up after shim disconnected" id=c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c namespace=k8s.io Feb 12 19:18:05.806430 env[1135]: time="2024-02-12T19:18:05.806429889Z" level=info msg="cleaning up dead shim" Feb 12 19:18:05.812904 env[1135]: time="2024-02-12T19:18:05.812851445Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1921 runtime=io.containerd.runc.v2\n" Feb 12 19:18:06.343041 systemd[1]: run-containerd-runc-k8s.io-c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c-runc.FphmHS.mount: Deactivated successfully. 
Feb 12 19:18:06.343140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c-rootfs.mount: Deactivated successfully. Feb 12 19:18:06.604507 kubelet[1402]: E0212 19:18:06.604362 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:06.711119 kubelet[1402]: E0212 19:18:06.711076 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:06.711700 kubelet[1402]: E0212 19:18:06.711679 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:06.713610 env[1135]: time="2024-02-12T19:18:06.713564130Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:18:06.726102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171414133.mount: Deactivated successfully. Feb 12 19:18:06.733037 env[1135]: time="2024-02-12T19:18:06.732993715Z" level=info msg="CreateContainer within sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\"" Feb 12 19:18:06.733682 env[1135]: time="2024-02-12T19:18:06.733618959Z" level=info msg="StartContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\"" Feb 12 19:18:06.747988 systemd[1]: Started cri-containerd-7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5.scope. 
Feb 12 19:18:06.804897 env[1135]: time="2024-02-12T19:18:06.804845702Z" level=info msg="StartContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" returns successfully" Feb 12 19:18:06.947493 kubelet[1402]: I0212 19:18:06.946734 1402 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:18:07.092787 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:18:07.311784 kernel: Initializing XFRM netlink socket Feb 12 19:18:07.313791 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:18:07.604792 kubelet[1402]: E0212 19:18:07.604667 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:07.716272 kubelet[1402]: E0212 19:18:07.716245 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:07.731733 kubelet[1402]: I0212 19:18:07.731704 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m6rdq" podStartSLOduration=5.987204082 podCreationTimestamp="2024-02-12 19:17:56 +0000 UTC" firstStartedPulling="2024-02-12 19:17:57.589533391 +0000 UTC m=+3.853271168" lastFinishedPulling="2024-02-12 19:18:03.333998907 +0000 UTC m=+9.597736684" observedRunningTime="2024-02-12 19:18:07.731531181 +0000 UTC m=+13.995268998" watchObservedRunningTime="2024-02-12 19:18:07.731669598 +0000 UTC m=+13.995407376" Feb 12 19:18:08.605756 kubelet[1402]: E0212 19:18:08.605708 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:08.717443 kubelet[1402]: E0212 19:18:08.717410 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:08.925321 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 19:18:08.924856 systemd-networkd[1046]: cilium_host: Link UP
Feb 12 19:18:08.924962 systemd-networkd[1046]: cilium_net: Link UP
Feb 12 19:18:08.924965 systemd-networkd[1046]: cilium_net: Gained carrier
Feb 12 19:18:08.925080 systemd-networkd[1046]: cilium_host: Gained carrier
Feb 12 19:18:08.925204 systemd-networkd[1046]: cilium_host: Gained IPv6LL
Feb 12 19:18:09.009035 systemd-networkd[1046]: cilium_vxlan: Link UP
Feb 12 19:18:09.009043 systemd-networkd[1046]: cilium_vxlan: Gained carrier
Feb 12 19:18:09.321772 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:18:09.606165 kubelet[1402]: E0212 19:18:09.606119 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:09.718639 kubelet[1402]: E0212 19:18:09.718602 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:09.789088 systemd-networkd[1046]: cilium_net: Gained IPv6LL
Feb 12 19:18:09.885082 systemd-networkd[1046]: lxc_health: Link UP
Feb 12 19:18:09.894160 systemd-networkd[1046]: lxc_health: Gained carrier
Feb 12 19:18:09.894767 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:18:10.557166 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL
Feb 12 19:18:10.607301 kubelet[1402]: E0212 19:18:10.607264 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:10.918283 kubelet[1402]: E0212 19:18:10.918168 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:11.389216 systemd-networkd[1046]: lxc_health: Gained IPv6LL
Feb 12 19:18:11.608056 kubelet[1402]: E0212 19:18:11.608009 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:11.720923 kubelet[1402]: E0212 19:18:11.720828 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:12.608465 kubelet[1402]: E0212 19:18:12.608415 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:12.846891 kubelet[1402]: I0212 19:18:12.846835 1402 topology_manager.go:215] "Topology Admit Handler" podUID="86462b33-8b8f-491c-a5aa-6368430fbfac" podNamespace="default" podName="nginx-deployment-6d5f899847-dd6b5"
Feb 12 19:18:12.852475 systemd[1]: Created slice kubepods-besteffort-pod86462b33_8b8f_491c_a5aa_6368430fbfac.slice.
Feb 12 19:18:12.918942 kubelet[1402]: I0212 19:18:12.918827 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7572x\" (UniqueName: \"kubernetes.io/projected/86462b33-8b8f-491c-a5aa-6368430fbfac-kube-api-access-7572x\") pod \"nginx-deployment-6d5f899847-dd6b5\" (UID: \"86462b33-8b8f-491c-a5aa-6368430fbfac\") " pod="default/nginx-deployment-6d5f899847-dd6b5"
Feb 12 19:18:13.155802 env[1135]: time="2024-02-12T19:18:13.155734578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-dd6b5,Uid:86462b33-8b8f-491c-a5aa-6368430fbfac,Namespace:default,Attempt:0,}"
Feb 12 19:18:13.199249 systemd-networkd[1046]: lxc9e74bb943115: Link UP
Feb 12 19:18:13.208774 kernel: eth0: renamed from tmp1ab18
Feb 12 19:18:13.215770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:18:13.215822 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9e74bb943115: link becomes ready
Feb 12 19:18:13.215809 systemd-networkd[1046]: lxc9e74bb943115: Gained carrier
Feb 12 19:18:13.609543 kubelet[1402]: E0212 19:18:13.609164 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:14.421381 env[1135]: time="2024-02-12T19:18:14.420997508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:18:14.421381 env[1135]: time="2024-02-12T19:18:14.421217324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:18:14.421381 env[1135]: time="2024-02-12T19:18:14.421229012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:18:14.421804 env[1135]: time="2024-02-12T19:18:14.421423292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ab183681226e4cbe1c5086a8c0ac815a936fd6e51380ed7f1509802595e5962 pid=2470 runtime=io.containerd.runc.v2
Feb 12 19:18:14.432280 systemd[1]: Started cri-containerd-1ab183681226e4cbe1c5086a8c0ac815a936fd6e51380ed7f1509802595e5962.scope.
Feb 12 19:18:14.499971 systemd-resolved[1082]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:18:14.516677 env[1135]: time="2024-02-12T19:18:14.516626836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-dd6b5,Uid:86462b33-8b8f-491c-a5aa-6368430fbfac,Namespace:default,Attempt:0,} returns sandbox id \"1ab183681226e4cbe1c5086a8c0ac815a936fd6e51380ed7f1509802595e5962\""
Feb 12 19:18:14.518076 env[1135]: time="2024-02-12T19:18:14.518051838Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:18:14.588891 systemd-networkd[1046]: lxc9e74bb943115: Gained IPv6LL
Feb 12 19:18:14.596691 kubelet[1402]: E0212 19:18:14.596655 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:14.609837 kubelet[1402]: E0212 19:18:14.609797 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:15.610941 kubelet[1402]: E0212 19:18:15.610892 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:16.572697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049530875.mount: Deactivated successfully.
Feb 12 19:18:16.611371 kubelet[1402]: E0212 19:18:16.611321 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:17.294252 env[1135]: time="2024-02-12T19:18:17.294184661Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:17.295563 env[1135]: time="2024-02-12T19:18:17.295532580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:17.297148 env[1135]: time="2024-02-12T19:18:17.297122439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:17.299343 env[1135]: time="2024-02-12T19:18:17.299306064Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:17.299932 env[1135]: time="2024-02-12T19:18:17.299906473Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 19:18:17.301474 env[1135]: time="2024-02-12T19:18:17.301443710Z" level=info msg="CreateContainer within sandbox \"1ab183681226e4cbe1c5086a8c0ac815a936fd6e51380ed7f1509802595e5962\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 19:18:17.312438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632780057.mount: Deactivated successfully.
Feb 12 19:18:17.314577 env[1135]: time="2024-02-12T19:18:17.314534378Z" level=info msg="CreateContainer within sandbox \"1ab183681226e4cbe1c5086a8c0ac815a936fd6e51380ed7f1509802595e5962\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ef8e8867d513debec6b43c9dfad92ad7bf775e69607d64bacd53ec5930a36a0f\""
Feb 12 19:18:17.315137 env[1135]: time="2024-02-12T19:18:17.315106335Z" level=info msg="StartContainer for \"ef8e8867d513debec6b43c9dfad92ad7bf775e69607d64bacd53ec5930a36a0f\""
Feb 12 19:18:17.331352 systemd[1]: Started cri-containerd-ef8e8867d513debec6b43c9dfad92ad7bf775e69607d64bacd53ec5930a36a0f.scope.
Feb 12 19:18:17.383660 env[1135]: time="2024-02-12T19:18:17.383605533Z" level=info msg="StartContainer for \"ef8e8867d513debec6b43c9dfad92ad7bf775e69607d64bacd53ec5930a36a0f\" returns successfully"
Feb 12 19:18:17.612060 kubelet[1402]: E0212 19:18:17.611937 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:17.738052 kubelet[1402]: I0212 19:18:17.738007 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-dd6b5" podStartSLOduration=2.955681984 podCreationTimestamp="2024-02-12 19:18:12 +0000 UTC" firstStartedPulling="2024-02-12 19:18:14.517827979 +0000 UTC m=+20.781565756" lastFinishedPulling="2024-02-12 19:18:17.300120482 +0000 UTC m=+23.563858259" observedRunningTime="2024-02-12 19:18:17.737921185 +0000 UTC m=+24.001659002" watchObservedRunningTime="2024-02-12 19:18:17.737974487 +0000 UTC m=+24.001712224"
Feb 12 19:18:18.310908 systemd[1]: run-containerd-runc-k8s.io-ef8e8867d513debec6b43c9dfad92ad7bf775e69607d64bacd53ec5930a36a0f-runc.f7i2q2.mount: Deactivated successfully.
Feb 12 19:18:18.612584 kubelet[1402]: E0212 19:18:18.612444 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:19.254400 kubelet[1402]: I0212 19:18:19.254365 1402 topology_manager.go:215] "Topology Admit Handler" podUID="d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 12 19:18:19.258800 systemd[1]: Created slice kubepods-besteffort-podd3dffef5_d1f4_4b2b_b6d8_d044b2abb7e0.slice.
Feb 12 19:18:19.350935 kubelet[1402]: I0212 19:18:19.350894 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0-data\") pod \"nfs-server-provisioner-0\" (UID: \"d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:18:19.351130 kubelet[1402]: I0212 19:18:19.351118 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjl2h\" (UniqueName: \"kubernetes.io/projected/d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0-kube-api-access-sjl2h\") pod \"nfs-server-provisioner-0\" (UID: \"d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:18:19.561967 env[1135]: time="2024-02-12T19:18:19.561840328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0,Namespace:default,Attempt:0,}"
Feb 12 19:18:19.586460 systemd-networkd[1046]: lxcc64be9a2684c: Link UP
Feb 12 19:18:19.598352 kernel: eth0: renamed from tmp157bf
Feb 12 19:18:19.605656 systemd-networkd[1046]: lxcc64be9a2684c: Gained carrier
Feb 12 19:18:19.605819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:18:19.605868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc64be9a2684c: link becomes ready
Feb 12 19:18:19.613211 kubelet[1402]: E0212 19:18:19.613166 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:19.868509 env[1135]: time="2024-02-12T19:18:19.868361431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:18:19.868509 env[1135]: time="2024-02-12T19:18:19.868408286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:18:19.868509 env[1135]: time="2024-02-12T19:18:19.868418970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:18:19.868672 env[1135]: time="2024-02-12T19:18:19.868540568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/157bf888983fed418bd74b71014b95e14c904c05cafffb11486d9cc84a8170b3 pid=2599 runtime=io.containerd.runc.v2
Feb 12 19:18:19.885411 systemd[1]: Started cri-containerd-157bf888983fed418bd74b71014b95e14c904c05cafffb11486d9cc84a8170b3.scope.
Feb 12 19:18:19.906653 systemd-resolved[1082]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:18:19.923968 env[1135]: time="2024-02-12T19:18:19.923914826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3dffef5-d1f4-4b2b-b6d8-d044b2abb7e0,Namespace:default,Attempt:0,} returns sandbox id \"157bf888983fed418bd74b71014b95e14c904c05cafffb11486d9cc84a8170b3\""
Feb 12 19:18:19.925444 env[1135]: time="2024-02-12T19:18:19.925415903Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 19:18:20.614122 kubelet[1402]: E0212 19:18:20.614079 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:20.989967 systemd-networkd[1046]: lxcc64be9a2684c: Gained IPv6LL
Feb 12 19:18:21.615173 kubelet[1402]: E0212 19:18:21.615105 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:22.053157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207556804.mount: Deactivated successfully.
Feb 12 19:18:22.615427 kubelet[1402]: E0212 19:18:22.615383 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:23.616494 kubelet[1402]: E0212 19:18:23.616441 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:24.205067 env[1135]: time="2024-02-12T19:18:24.205007720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:24.206336 env[1135]: time="2024-02-12T19:18:24.206301764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:24.207841 env[1135]: time="2024-02-12T19:18:24.207812130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:24.209494 env[1135]: time="2024-02-12T19:18:24.209469683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:24.210939 env[1135]: time="2024-02-12T19:18:24.210895992Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 12 19:18:24.212898 env[1135]: time="2024-02-12T19:18:24.212854842Z" level=info msg="CreateContainer within sandbox \"157bf888983fed418bd74b71014b95e14c904c05cafffb11486d9cc84a8170b3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 19:18:24.223066 env[1135]: time="2024-02-12T19:18:24.223018122Z" level=info msg="CreateContainer within sandbox \"157bf888983fed418bd74b71014b95e14c904c05cafffb11486d9cc84a8170b3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"deca247d6a3dd5b2ba5f9100cfc4df4711249e77c7ff77296e8bb3d45c929b97\""
Feb 12 19:18:24.223457 env[1135]: time="2024-02-12T19:18:24.223424199Z" level=info msg="StartContainer for \"deca247d6a3dd5b2ba5f9100cfc4df4711249e77c7ff77296e8bb3d45c929b97\""
Feb 12 19:18:24.238671 systemd[1]: Started cri-containerd-deca247d6a3dd5b2ba5f9100cfc4df4711249e77c7ff77296e8bb3d45c929b97.scope.
Feb 12 19:18:24.268435 env[1135]: time="2024-02-12T19:18:24.268387372Z" level=info msg="StartContainer for \"deca247d6a3dd5b2ba5f9100cfc4df4711249e77c7ff77296e8bb3d45c929b97\" returns successfully"
Feb 12 19:18:24.617343 kubelet[1402]: E0212 19:18:24.617220 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:24.754786 kubelet[1402]: I0212 19:18:24.754739 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.468731105 podCreationTimestamp="2024-02-12 19:18:19 +0000 UTC" firstStartedPulling="2024-02-12 19:18:19.925178788 +0000 UTC m=+26.188916565" lastFinishedPulling="2024-02-12 19:18:24.211145679 +0000 UTC m=+30.474883416" observedRunningTime="2024-02-12 19:18:24.753780983 +0000 UTC m=+31.017518800" watchObservedRunningTime="2024-02-12 19:18:24.754697956 +0000 UTC m=+31.018435693"
Feb 12 19:18:25.618364 kubelet[1402]: E0212 19:18:25.618316 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:26.619204 kubelet[1402]: E0212 19:18:26.619133 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:27.620142 kubelet[1402]: E0212 19:18:27.620099 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:28.620449 kubelet[1402]: E0212 19:18:28.620413 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:29.621907 kubelet[1402]: E0212 19:18:29.621866 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:30.622358 kubelet[1402]: E0212 19:18:30.622313 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:31.622789 kubelet[1402]: E0212 19:18:31.622711 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:32.623763 kubelet[1402]: E0212 19:18:32.623712 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:32.983505 update_engine[1123]: I0212 19:18:32.983440 1123 update_attempter.cc:509] Updating boot flags...
Feb 12 19:18:33.624173 kubelet[1402]: E0212 19:18:33.624129 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:33.651475 kubelet[1402]: I0212 19:18:33.651409 1402 topology_manager.go:215] "Topology Admit Handler" podUID="b5b999ca-1cb2-41c3-8a03-9f280f511ccb" podNamespace="default" podName="test-pod-1"
Feb 12 19:18:33.657903 systemd[1]: Created slice kubepods-besteffort-podb5b999ca_1cb2_41c3_8a03_9f280f511ccb.slice.
Feb 12 19:18:33.730279 kubelet[1402]: I0212 19:18:33.730236 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a15d659-3d49-4373-be16-5d25de59667a\" (UniqueName: \"kubernetes.io/nfs/b5b999ca-1cb2-41c3-8a03-9f280f511ccb-pvc-3a15d659-3d49-4373-be16-5d25de59667a\") pod \"test-pod-1\" (UID: \"b5b999ca-1cb2-41c3-8a03-9f280f511ccb\") " pod="default/test-pod-1"
Feb 12 19:18:33.730498 kubelet[1402]: I0212 19:18:33.730464 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mfc\" (UniqueName: \"kubernetes.io/projected/b5b999ca-1cb2-41c3-8a03-9f280f511ccb-kube-api-access-72mfc\") pod \"test-pod-1\" (UID: \"b5b999ca-1cb2-41c3-8a03-9f280f511ccb\") " pod="default/test-pod-1"
Feb 12 19:18:33.854775 kernel: FS-Cache: Loaded
Feb 12 19:18:33.882073 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 19:18:33.882171 kernel: RPC: Registered udp transport module.
Feb 12 19:18:33.882194 kernel: RPC: Registered tcp transport module.
Feb 12 19:18:33.882808 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 19:18:33.913768 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 19:18:34.043780 kernel: NFS: Registering the id_resolver key type
Feb 12 19:18:34.043900 kernel: Key type id_resolver registered
Feb 12 19:18:34.043921 kernel: Key type id_legacy registered
Feb 12 19:18:34.066753 nfsidmap[2730]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 19:18:34.070030 nfsidmap[2733]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 19:18:34.261504 env[1135]: time="2024-02-12T19:18:34.261349516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5b999ca-1cb2-41c3-8a03-9f280f511ccb,Namespace:default,Attempt:0,}"
Feb 12 19:18:34.288287 systemd-networkd[1046]: lxce824f643d0c2: Link UP
Feb 12 19:18:34.297799 kernel: eth0: renamed from tmpc7a21
Feb 12 19:18:34.312534 systemd-networkd[1046]: lxce824f643d0c2: Gained carrier
Feb 12 19:18:34.312764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:18:34.312806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce824f643d0c2: link becomes ready
Feb 12 19:18:34.491997 env[1135]: time="2024-02-12T19:18:34.491919849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:18:34.491997 env[1135]: time="2024-02-12T19:18:34.491964254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:18:34.491997 env[1135]: time="2024-02-12T19:18:34.491974415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:18:34.492181 env[1135]: time="2024-02-12T19:18:34.492120111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7a21921a695d6b759bf4ad4f78813862870720fce1969a76d7d4961ca2a87dc pid=2762 runtime=io.containerd.runc.v2
Feb 12 19:18:34.502069 systemd[1]: Started cri-containerd-c7a21921a695d6b759bf4ad4f78813862870720fce1969a76d7d4961ca2a87dc.scope.
Feb 12 19:18:34.523193 systemd-resolved[1082]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:18:34.539584 env[1135]: time="2024-02-12T19:18:34.539545851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5b999ca-1cb2-41c3-8a03-9f280f511ccb,Namespace:default,Attempt:0,} returns sandbox id \"c7a21921a695d6b759bf4ad4f78813862870720fce1969a76d7d4961ca2a87dc\""
Feb 12 19:18:34.540988 env[1135]: time="2024-02-12T19:18:34.540957888Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:18:34.597121 kubelet[1402]: E0212 19:18:34.597077 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:34.624316 kubelet[1402]: E0212 19:18:34.624274 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:34.875889 env[1135]: time="2024-02-12T19:18:34.875604684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:34.877059 env[1135]: time="2024-02-12T19:18:34.877029122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:34.880152 env[1135]: time="2024-02-12T19:18:34.880111504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:34.883481 env[1135]: time="2024-02-12T19:18:34.883452074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:18:34.884071 env[1135]: time="2024-02-12T19:18:34.884044500Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 19:18:34.885616 env[1135]: time="2024-02-12T19:18:34.885585071Z" level=info msg="CreateContainer within sandbox \"c7a21921a695d6b759bf4ad4f78813862870720fce1969a76d7d4961ca2a87dc\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 19:18:34.894264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621423568.mount: Deactivated successfully.
Feb 12 19:18:34.898497 env[1135]: time="2024-02-12T19:18:34.898455138Z" level=info msg="CreateContainer within sandbox \"c7a21921a695d6b759bf4ad4f78813862870720fce1969a76d7d4961ca2a87dc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"da533f4fb54e711a0457ef766b6d955d6a1bbdd1203b9231883a99612373b69e\""
Feb 12 19:18:34.898898 env[1135]: time="2024-02-12T19:18:34.898816498Z" level=info msg="StartContainer for \"da533f4fb54e711a0457ef766b6d955d6a1bbdd1203b9231883a99612373b69e\""
Feb 12 19:18:34.911943 systemd[1]: Started cri-containerd-da533f4fb54e711a0457ef766b6d955d6a1bbdd1203b9231883a99612373b69e.scope.
Feb 12 19:18:34.958754 env[1135]: time="2024-02-12T19:18:34.958700980Z" level=info msg="StartContainer for \"da533f4fb54e711a0457ef766b6d955d6a1bbdd1203b9231883a99612373b69e\" returns successfully"
Feb 12 19:18:35.625042 kubelet[1402]: E0212 19:18:35.624993 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:35.772090 kubelet[1402]: I0212 19:18:35.772051 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.428387448 podCreationTimestamp="2024-02-12 19:18:19 +0000 UTC" firstStartedPulling="2024-02-12 19:18:34.54061689 +0000 UTC m=+40.804354667" lastFinishedPulling="2024-02-12 19:18:34.884242962 +0000 UTC m=+41.147980739" observedRunningTime="2024-02-12 19:18:35.771206194 +0000 UTC m=+42.034943971" watchObservedRunningTime="2024-02-12 19:18:35.77201352 +0000 UTC m=+42.035751297"
Feb 12 19:18:35.772876 systemd-networkd[1046]: lxce824f643d0c2: Gained IPv6LL
Feb 12 19:18:36.625425 kubelet[1402]: E0212 19:18:36.625376 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:37.625899 kubelet[1402]: E0212 19:18:37.625839 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:38.626376 kubelet[1402]: E0212 19:18:38.626314 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:39.627441 kubelet[1402]: E0212 19:18:39.627379 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:40.627909 kubelet[1402]: E0212 19:18:40.627840 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:41.628223 kubelet[1402]: E0212 19:18:41.628151 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:42.630691 systemd[1]: run-containerd-runc-k8s.io-7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5-runc.h3w04m.mount: Deactivated successfully.
Feb 12 19:18:42.631865 kubelet[1402]: E0212 19:18:42.631498 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:18:42.658283 env[1135]: time="2024-02-12T19:18:42.658230501Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:18:42.667305 env[1135]: time="2024-02-12T19:18:42.667269070Z" level=info msg="StopContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" with timeout 2 (s)"
Feb 12 19:18:42.667945 env[1135]: time="2024-02-12T19:18:42.667917400Z" level=info msg="Stop container \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" with signal terminated"
Feb 12 19:18:42.675705 systemd-networkd[1046]: lxc_health: Link DOWN
Feb 12 19:18:42.675714 systemd-networkd[1046]: lxc_health: Lost carrier
Feb 12 19:18:42.732314 systemd[1]: cri-containerd-7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5.scope: Deactivated successfully.
Feb 12 19:18:42.732637 systemd[1]: cri-containerd-7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5.scope: Consumed 6.693s CPU time.
Feb 12 19:18:42.749708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5-rootfs.mount: Deactivated successfully.
Feb 12 19:18:42.869965 env[1135]: time="2024-02-12T19:18:42.869909316Z" level=info msg="shim disconnected" id=7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5
Feb 12 19:18:42.869965 env[1135]: time="2024-02-12T19:18:42.869954039Z" level=warning msg="cleaning up after shim disconnected" id=7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5 namespace=k8s.io
Feb 12 19:18:42.869965 env[1135]: time="2024-02-12T19:18:42.869963200Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:42.876415 env[1135]: time="2024-02-12T19:18:42.876381729Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2897 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:42.879168 env[1135]: time="2024-02-12T19:18:42.879131419Z" level=info msg="StopContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" returns successfully"
Feb 12 19:18:42.879735 env[1135]: time="2024-02-12T19:18:42.879709023Z" level=info msg="StopPodSandbox for \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\""
Feb 12 19:18:42.879887 env[1135]: time="2024-02-12T19:18:42.879864075Z" level=info msg="Container to stop \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:18:42.879959 env[1135]: time="2024-02-12T19:18:42.879942281Z" level=info msg="Container to stop \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:18:42.880018 env[1135]: time="2024-02-12T19:18:42.880002725Z" level=info msg="Container to stop \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:18:42.880077 env[1135]: time="2024-02-12T19:18:42.880060410Z" level=info msg="Container to stop \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:18:42.880149 env[1135]: time="2024-02-12T19:18:42.880130975Z" level=info msg="Container to stop \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:18:42.881550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63-shm.mount: Deactivated successfully.
Feb 12 19:18:42.886144 systemd[1]: cri-containerd-b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63.scope: Deactivated successfully.
Feb 12 19:18:42.905820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63-rootfs.mount: Deactivated successfully.
Feb 12 19:18:42.910807 env[1135]: time="2024-02-12T19:18:42.910735548Z" level=info msg="shim disconnected" id=b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63
Feb 12 19:18:42.910807 env[1135]: time="2024-02-12T19:18:42.910806993Z" level=warning msg="cleaning up after shim disconnected" id=b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63 namespace=k8s.io
Feb 12 19:18:42.910973 env[1135]: time="2024-02-12T19:18:42.910816834Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:42.917456 env[1135]: time="2024-02-12T19:18:42.917421577Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2927 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:42.917728 env[1135]: time="2024-02-12T19:18:42.917697238Z" level=info msg="TearDown network for sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" successfully"
Feb 12 19:18:42.917728 env[1135]: time="2024-02-12T19:18:42.917718040Z" level=info msg="StopPodSandbox for \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" returns successfully"
Feb 12 19:18:43.078495 kubelet[1402]: I0212 19:18:43.078446 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:43.078495 kubelet[1402]: I0212 19:18:43.078471 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-lib-modules\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078527 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cni-path\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078548 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-hostproc\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078548 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cni-path" (OuterVolumeSpecName: "cni-path") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078567 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-cgroup\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078577 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-hostproc" (OuterVolumeSpecName: "hostproc") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:43.078689 kubelet[1402]: I0212 19:18:43.078592 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-hubble-tls\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078598 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078616 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-config-path\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078638 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-kernel\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078658 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-bpf-maps\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078678 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrms5\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-kube-api-access-qrms5\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.078865 kubelet[1402]: I0212 19:18:43.078695 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-etc-cni-netd\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") "
Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078718 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\"
(UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-net\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078734 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-run\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078764 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-xtables-lock\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078784 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3addb8ba-f894-4902-b2c3-db57695002ef-clustermesh-secrets\") pod \"3addb8ba-f894-4902-b2c3-db57695002ef\" (UID: \"3addb8ba-f894-4902-b2c3-db57695002ef\") " Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078818 1402 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-lib-modules\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078831 1402 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cni-path\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.079008 kubelet[1402]: I0212 19:18:43.078841 1402 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-hostproc\") on node 
\"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.079175 kubelet[1402]: I0212 19:18:43.078850 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-cgroup\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.079175 kubelet[1402]: I0212 19:18:43.078941 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.079175 kubelet[1402]: I0212 19:18:43.078977 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.079175 kubelet[1402]: I0212 19:18:43.078996 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.079175 kubelet[1402]: I0212 19:18:43.079067 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.079290 kubelet[1402]: I0212 19:18:43.079096 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.079290 kubelet[1402]: I0212 19:18:43.079113 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:43.080913 kubelet[1402]: I0212 19:18:43.080867 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:18:43.081767 kubelet[1402]: I0212 19:18:43.081720 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3addb8ba-f894-4902-b2c3-db57695002ef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:18:43.081839 kubelet[1402]: I0212 19:18:43.081822 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:43.081981 kubelet[1402]: I0212 19:18:43.081933 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-kube-api-access-qrms5" (OuterVolumeSpecName: "kube-api-access-qrms5") pod "3addb8ba-f894-4902-b2c3-db57695002ef" (UID: "3addb8ba-f894-4902-b2c3-db57695002ef"). InnerVolumeSpecName "kube-api-access-qrms5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179384 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-config-path\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179420 1402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-kernel\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179432 1402 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-bpf-maps\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179441 1402 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-hubble-tls\") on node 
\"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179451 1402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-host-proc-sys-net\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179459 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-cilium-run\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179469 1402 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-xtables-lock\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180055 kubelet[1402]: I0212 19:18:43.179477 1402 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3addb8ba-f894-4902-b2c3-db57695002ef-clustermesh-secrets\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180600 kubelet[1402]: I0212 19:18:43.179487 1402 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qrms5\" (UniqueName: \"kubernetes.io/projected/3addb8ba-f894-4902-b2c3-db57695002ef-kube-api-access-qrms5\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.180600 kubelet[1402]: I0212 19:18:43.179496 1402 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3addb8ba-f894-4902-b2c3-db57695002ef-etc-cni-netd\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:43.628421 systemd[1]: var-lib-kubelet-pods-3addb8ba\x2df894\x2d4902\x2db2c3\x2ddb57695002ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrms5.mount: Deactivated successfully. 
Feb 12 19:18:43.628529 systemd[1]: var-lib-kubelet-pods-3addb8ba\x2df894\x2d4902\x2db2c3\x2ddb57695002ef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:18:43.628586 systemd[1]: var-lib-kubelet-pods-3addb8ba\x2df894\x2d4902\x2db2c3\x2ddb57695002ef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:18:43.631903 kubelet[1402]: E0212 19:18:43.631878 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:43.779671 kubelet[1402]: I0212 19:18:43.779641 1402 scope.go:117] "RemoveContainer" containerID="7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5" Feb 12 19:18:43.783068 env[1135]: time="2024-02-12T19:18:43.783030098Z" level=info msg="RemoveContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\"" Feb 12 19:18:43.783568 systemd[1]: Removed slice kubepods-burstable-pod3addb8ba_f894_4902_b2c3_db57695002ef.slice. Feb 12 19:18:43.783660 systemd[1]: kubepods-burstable-pod3addb8ba_f894_4902_b2c3_db57695002ef.slice: Consumed 6.900s CPU time. 
Feb 12 19:18:43.787033 env[1135]: time="2024-02-12T19:18:43.786997988Z" level=info msg="RemoveContainer for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" returns successfully" Feb 12 19:18:43.787273 kubelet[1402]: I0212 19:18:43.787251 1402 scope.go:117] "RemoveContainer" containerID="c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c" Feb 12 19:18:43.788326 env[1135]: time="2024-02-12T19:18:43.788294122Z" level=info msg="RemoveContainer for \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\"" Feb 12 19:18:43.790310 env[1135]: time="2024-02-12T19:18:43.790285588Z" level=info msg="RemoveContainer for \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\" returns successfully" Feb 12 19:18:43.790494 kubelet[1402]: I0212 19:18:43.790473 1402 scope.go:117] "RemoveContainer" containerID="3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc" Feb 12 19:18:43.791604 env[1135]: time="2024-02-12T19:18:43.791567761Z" level=info msg="RemoveContainer for \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\"" Feb 12 19:18:43.793631 env[1135]: time="2024-02-12T19:18:43.793592629Z" level=info msg="RemoveContainer for \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\" returns successfully" Feb 12 19:18:43.793796 kubelet[1402]: I0212 19:18:43.793771 1402 scope.go:117] "RemoveContainer" containerID="ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb" Feb 12 19:18:43.794889 env[1135]: time="2024-02-12T19:18:43.794865562Z" level=info msg="RemoveContainer for \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\"" Feb 12 19:18:43.797958 env[1135]: time="2024-02-12T19:18:43.797921785Z" level=info msg="RemoveContainer for \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\" returns successfully" Feb 12 19:18:43.798117 kubelet[1402]: I0212 19:18:43.798097 1402 scope.go:117] "RemoveContainer" 
containerID="f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced" Feb 12 19:18:43.799082 env[1135]: time="2024-02-12T19:18:43.799058548Z" level=info msg="RemoveContainer for \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\"" Feb 12 19:18:43.801156 env[1135]: time="2024-02-12T19:18:43.801123619Z" level=info msg="RemoveContainer for \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\" returns successfully" Feb 12 19:18:43.801344 kubelet[1402]: I0212 19:18:43.801323 1402 scope.go:117] "RemoveContainer" containerID="7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5" Feb 12 19:18:43.801668 env[1135]: time="2024-02-12T19:18:43.801586693Z" level=error msg="ContainerStatus for \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\": not found" Feb 12 19:18:43.801929 kubelet[1402]: E0212 19:18:43.801912 1402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\": not found" containerID="7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5" Feb 12 19:18:43.802027 kubelet[1402]: I0212 19:18:43.802014 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5"} err="failed to get container status \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cc3a989c6c53e2cc000175499f45165fcb2c4c1afd2382a72e30e6f5f3e48b5\": not found" Feb 12 19:18:43.802058 kubelet[1402]: I0212 19:18:43.802030 1402 scope.go:117] "RemoveContainer" 
containerID="c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c" Feb 12 19:18:43.802291 env[1135]: time="2024-02-12T19:18:43.802234820Z" level=error msg="ContainerStatus for \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\": not found" Feb 12 19:18:43.802443 kubelet[1402]: E0212 19:18:43.802424 1402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\": not found" containerID="c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c" Feb 12 19:18:43.802534 kubelet[1402]: I0212 19:18:43.802522 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c"} err="failed to get container status \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0f206f1133eb71ad35e4241676a35b56630f76a44df1b7a1f3d72c69cd62e7c\": not found" Feb 12 19:18:43.802591 kubelet[1402]: I0212 19:18:43.802582 1402 scope.go:117] "RemoveContainer" containerID="3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc" Feb 12 19:18:43.802826 env[1135]: time="2024-02-12T19:18:43.802782860Z" level=error msg="ContainerStatus for \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\": not found" Feb 12 19:18:43.802955 kubelet[1402]: E0212 19:18:43.802934 1402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\": not found" containerID="3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc" Feb 12 19:18:43.803001 kubelet[1402]: I0212 19:18:43.802969 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc"} err="failed to get container status \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\": rpc error: code = NotFound desc = an error occurred when try to find container \"3516dc6e3b5421a72bd2c9b19897c3bfe803f082c6867edd7e7220ad55865efc\": not found" Feb 12 19:18:43.803001 kubelet[1402]: I0212 19:18:43.802981 1402 scope.go:117] "RemoveContainer" containerID="ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb" Feb 12 19:18:43.803242 env[1135]: time="2024-02-12T19:18:43.803179929Z" level=error msg="ContainerStatus for \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\": not found" Feb 12 19:18:43.803431 kubelet[1402]: E0212 19:18:43.803416 1402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\": not found" containerID="ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb" Feb 12 19:18:43.803483 kubelet[1402]: I0212 19:18:43.803445 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb"} err="failed to get container status \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"ba648a26e8607caf4bb33e56395d8e1c026002405fbcd07543104e62336bf4eb\": not found" Feb 12 19:18:43.803483 kubelet[1402]: I0212 19:18:43.803455 1402 scope.go:117] "RemoveContainer" containerID="f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced" Feb 12 19:18:43.803614 env[1135]: time="2024-02-12T19:18:43.803574598Z" level=error msg="ContainerStatus for \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\": not found" Feb 12 19:18:43.803793 kubelet[1402]: E0212 19:18:43.803777 1402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\": not found" containerID="f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced" Feb 12 19:18:43.803885 kubelet[1402]: I0212 19:18:43.803873 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced"} err="failed to get container status \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6d0e37c7115b471161ad027408eee4baebac9734388bd4015ba3d87eb4f5ced\": not found" Feb 12 19:18:44.633010 kubelet[1402]: E0212 19:18:44.632965 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:44.655535 kubelet[1402]: E0212 19:18:44.655492 1402 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:18:44.685384 kubelet[1402]: 
I0212 19:18:44.685349 1402 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" path="/var/lib/kubelet/pods/3addb8ba-f894-4902-b2c3-db57695002ef/volumes" Feb 12 19:18:45.633850 kubelet[1402]: E0212 19:18:45.633810 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:46.024524 kubelet[1402]: I0212 19:18:46.024408 1402 topology_manager.go:215] "Topology Admit Handler" podUID="1f5f663b-3acd-4dac-9568-27953f2a8667" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-gn9ws" Feb 12 19:18:46.024848 kubelet[1402]: E0212 19:18:46.024828 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="apply-sysctl-overwrites" Feb 12 19:18:46.024936 kubelet[1402]: E0212 19:18:46.024926 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="mount-bpf-fs" Feb 12 19:18:46.024994 kubelet[1402]: E0212 19:18:46.024979 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="clean-cilium-state" Feb 12 19:18:46.025075 kubelet[1402]: E0212 19:18:46.025066 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="cilium-agent" Feb 12 19:18:46.025133 kubelet[1402]: E0212 19:18:46.025117 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="mount-cgroup" Feb 12 19:18:46.025209 kubelet[1402]: I0212 19:18:46.025199 1402 memory_manager.go:346] "RemoveStaleState removing state" podUID="3addb8ba-f894-4902-b2c3-db57695002ef" containerName="cilium-agent" Feb 12 19:18:46.029809 systemd[1]: Created slice kubepods-besteffort-pod1f5f663b_3acd_4dac_9568_27953f2a8667.slice. 
Feb 12 19:18:46.037867 kubelet[1402]: I0212 19:18:46.037834 1402 topology_manager.go:215] "Topology Admit Handler" podUID="386b301f-1da0-4e9d-865a-cef8d034b057" podNamespace="kube-system" podName="cilium-l2lxc" Feb 12 19:18:46.043353 systemd[1]: Created slice kubepods-burstable-pod386b301f_1da0_4e9d_865a_cef8d034b057.slice. Feb 12 19:18:46.193711 kubelet[1402]: I0212 19:18:46.193665 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-clustermesh-secrets\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193711 kubelet[1402]: I0212 19:18:46.193719 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-hubble-tls\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193902 kubelet[1402]: I0212 19:18:46.193763 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-run\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193902 kubelet[1402]: I0212 19:18:46.193783 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-hostproc\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193902 kubelet[1402]: I0212 19:18:46.193802 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cni-path\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193902 kubelet[1402]: I0212 19:18:46.193824 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-config-path\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.193902 kubelet[1402]: I0212 19:18:46.193851 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-ipsec-secrets\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194032 kubelet[1402]: I0212 19:18:46.193907 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-net\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194032 kubelet[1402]: I0212 19:18:46.193947 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f5f663b-3acd-4dac-9568-27953f2a8667-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-gn9ws\" (UID: \"1f5f663b-3acd-4dac-9568-27953f2a8667\") " pod="kube-system/cilium-operator-6bc8ccdb58-gn9ws" Feb 12 19:18:46.194032 kubelet[1402]: I0212 19:18:46.193968 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6tlq\" (UniqueName: 
\"kubernetes.io/projected/1f5f663b-3acd-4dac-9568-27953f2a8667-kube-api-access-p6tlq\") pod \"cilium-operator-6bc8ccdb58-gn9ws\" (UID: \"1f5f663b-3acd-4dac-9568-27953f2a8667\") " pod="kube-system/cilium-operator-6bc8ccdb58-gn9ws" Feb 12 19:18:46.194032 kubelet[1402]: I0212 19:18:46.193988 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-bpf-maps\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194118 kubelet[1402]: I0212 19:18:46.194071 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-xtables-lock\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194118 kubelet[1402]: I0212 19:18:46.194099 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvj6\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-kube-api-access-hgvj6\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194159 kubelet[1402]: I0212 19:18:46.194141 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-cgroup\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194185 kubelet[1402]: I0212 19:18:46.194168 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-etc-cni-netd\") pod 
\"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194210 kubelet[1402]: I0212 19:18:46.194190 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-lib-modules\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.194210 kubelet[1402]: I0212 19:18:46.194209 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-kernel\") pod \"cilium-l2lxc\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " pod="kube-system/cilium-l2lxc" Feb 12 19:18:46.195918 kubelet[1402]: E0212 19:18:46.195888 1402 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-hgvj6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-l2lxc" podUID="386b301f-1da0-4e9d-865a-cef8d034b057" Feb 12 19:18:46.332531 kubelet[1402]: E0212 19:18:46.332432 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:46.333767 env[1135]: time="2024-02-12T19:18:46.333418752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-gn9ws,Uid:1f5f663b-3acd-4dac-9568-27953f2a8667,Namespace:kube-system,Attempt:0,}" Feb 12 19:18:46.345446 env[1135]: time="2024-02-12T19:18:46.345383484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:18:46.345446 env[1135]: time="2024-02-12T19:18:46.345432008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:18:46.345446 env[1135]: time="2024-02-12T19:18:46.345444448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:18:46.345794 env[1135]: time="2024-02-12T19:18:46.345706705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e54ccf8a7fb16c441ae700e04b69acce4bd4715b27def29ea20ccf4c276fb5d7 pid=2957 runtime=io.containerd.runc.v2 Feb 12 19:18:46.355941 systemd[1]: Started cri-containerd-e54ccf8a7fb16c441ae700e04b69acce4bd4715b27def29ea20ccf4c276fb5d7.scope. Feb 12 19:18:46.403628 env[1135]: time="2024-02-12T19:18:46.403587521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-gn9ws,Uid:1f5f663b-3acd-4dac-9568-27953f2a8667,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54ccf8a7fb16c441ae700e04b69acce4bd4715b27def29ea20ccf4c276fb5d7\"" Feb 12 19:18:46.404412 kubelet[1402]: E0212 19:18:46.404379 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:46.405152 env[1135]: time="2024-02-12T19:18:46.405116740Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:18:46.584803 kubelet[1402]: I0212 19:18:46.584514 1402 setters.go:552] "Node became not ready" node="10.0.0.62" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T19:18:46Z","lastTransitionTime":"2024-02-12T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 12 19:18:46.634053 kubelet[1402]: E0212 19:18:46.634001 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:46.898963 kubelet[1402]: I0212 19:18:46.898880 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-bpf-maps\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.898963 kubelet[1402]: I0212 19:18:46.898917 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.898963 kubelet[1402]: I0212 19:18:46.898944 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvj6\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-kube-api-access-hgvj6\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.898963 kubelet[1402]: I0212 19:18:46.898970 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-config-path\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.898991 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-lib-modules\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.899010 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-cgroup\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.899028 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-kernel\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.899048 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-clustermesh-secrets\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.899066 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-xtables-lock\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899185 kubelet[1402]: I0212 19:18:46.899086 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-ipsec-secrets\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 
19:18:46.899104 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-net\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 19:18:46.899121 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-etc-cni-netd\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 19:18:46.899139 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-run\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 19:18:46.899158 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-hostproc\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 19:18:46.899176 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cni-path\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: \"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899324 kubelet[1402]: I0212 19:18:46.899195 1402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-hubble-tls\") pod \"386b301f-1da0-4e9d-865a-cef8d034b057\" (UID: 
\"386b301f-1da0-4e9d-865a-cef8d034b057\") " Feb 12 19:18:46.899463 kubelet[1402]: I0212 19:18:46.899229 1402 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-bpf-maps\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.899486 kubelet[1402]: I0212 19:18:46.899472 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899512 kubelet[1402]: I0212 19:18:46.899502 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899536 kubelet[1402]: I0212 19:18:46.899508 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899536 kubelet[1402]: I0212 19:18:46.899520 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-hostproc" (OuterVolumeSpecName: "hostproc") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899591 kubelet[1402]: I0212 19:18:46.899538 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cni-path" (OuterVolumeSpecName: "cni-path") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899591 kubelet[1402]: I0212 19:18:46.899557 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899591 kubelet[1402]: I0212 19:18:46.899572 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899775 kubelet[1402]: I0212 19:18:46.899698 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.899775 kubelet[1402]: I0212 19:18:46.899704 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:46.901489 kubelet[1402]: I0212 19:18:46.901448 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:18:46.901579 kubelet[1402]: I0212 19:18:46.901562 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-kube-api-access-hgvj6" (OuterVolumeSpecName: "kube-api-access-hgvj6") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "kube-api-access-hgvj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:46.902181 kubelet[1402]: I0212 19:18:46.902135 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:46.902337 kubelet[1402]: I0212 19:18:46.902301 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:18:46.903642 kubelet[1402]: I0212 19:18:46.903607 1402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "386b301f-1da0-4e9d-865a-cef8d034b057" (UID: "386b301f-1da0-4e9d-865a-cef8d034b057"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:18:46.999820 kubelet[1402]: I0212 19:18:46.999785 1402 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-hostproc\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999820 kubelet[1402]: I0212 19:18:46.999816 1402 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cni-path\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999820 kubelet[1402]: I0212 19:18:46.999828 1402 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-hubble-tls\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999842 1402 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hgvj6\" (UniqueName: \"kubernetes.io/projected/386b301f-1da0-4e9d-865a-cef8d034b057-kube-api-access-hgvj6\") on node 
\"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999852 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-config-path\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999861 1402 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-lib-modules\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999870 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-cgroup\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999879 1402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-kernel\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999888 1402 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-clustermesh-secrets\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999896 1402 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-xtables-lock\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:46.999989 kubelet[1402]: I0212 19:18:46.999905 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-ipsec-secrets\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:47.000151 kubelet[1402]: I0212 
19:18:46.999914 1402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-host-proc-sys-net\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:47.000151 kubelet[1402]: I0212 19:18:46.999922 1402 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-etc-cni-netd\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:47.000151 kubelet[1402]: I0212 19:18:46.999931 1402 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/386b301f-1da0-4e9d-865a-cef8d034b057-cilium-run\") on node \"10.0.0.62\" DevicePath \"\"" Feb 12 19:18:47.300456 systemd[1]: var-lib-kubelet-pods-386b301f\x2d1da0\x2d4e9d\x2d865a\x2dcef8d034b057-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgvj6.mount: Deactivated successfully. Feb 12 19:18:47.300546 systemd[1]: var-lib-kubelet-pods-386b301f\x2d1da0\x2d4e9d\x2d865a\x2dcef8d034b057-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:18:47.300598 systemd[1]: var-lib-kubelet-pods-386b301f\x2d1da0\x2d4e9d\x2d865a\x2dcef8d034b057-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:18:47.300648 systemd[1]: var-lib-kubelet-pods-386b301f\x2d1da0\x2d4e9d\x2d865a\x2dcef8d034b057-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 19:18:47.634578 kubelet[1402]: E0212 19:18:47.634537 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:47.666639 env[1135]: time="2024-02-12T19:18:47.666592147Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:47.668541 env[1135]: time="2024-02-12T19:18:47.668503906Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:47.670301 env[1135]: time="2024-02-12T19:18:47.670272056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:18:47.671117 env[1135]: time="2024-02-12T19:18:47.670730964Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:18:47.672800 env[1135]: time="2024-02-12T19:18:47.672766090Z" level=info msg="CreateContainer within sandbox \"e54ccf8a7fb16c441ae700e04b69acce4bd4715b27def29ea20ccf4c276fb5d7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:18:47.681195 env[1135]: time="2024-02-12T19:18:47.681151891Z" level=info msg="CreateContainer within sandbox \"e54ccf8a7fb16c441ae700e04b69acce4bd4715b27def29ea20ccf4c276fb5d7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"541116109b15adeeb79fad3f0509af6d6a74ace75751503bd091d418bac860a6\"" Feb 12 
19:18:47.681636 env[1135]: time="2024-02-12T19:18:47.681588238Z" level=info msg="StartContainer for \"541116109b15adeeb79fad3f0509af6d6a74ace75751503bd091d418bac860a6\"" Feb 12 19:18:47.704641 systemd[1]: Started cri-containerd-541116109b15adeeb79fad3f0509af6d6a74ace75751503bd091d418bac860a6.scope. Feb 12 19:18:47.772474 env[1135]: time="2024-02-12T19:18:47.772406354Z" level=info msg="StartContainer for \"541116109b15adeeb79fad3f0509af6d6a74ace75751503bd091d418bac860a6\" returns successfully" Feb 12 19:18:47.791504 kubelet[1402]: E0212 19:18:47.789686 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:47.793149 systemd[1]: Removed slice kubepods-burstable-pod386b301f_1da0_4e9d_865a_cef8d034b057.slice. Feb 12 19:18:47.803281 kubelet[1402]: I0212 19:18:47.803235 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-gn9ws" podStartSLOduration=0.536708706 podCreationTimestamp="2024-02-12 19:18:46 +0000 UTC" firstStartedPulling="2024-02-12 19:18:46.404844362 +0000 UTC m=+52.668582139" lastFinishedPulling="2024-02-12 19:18:47.671320681 +0000 UTC m=+53.935058418" observedRunningTime="2024-02-12 19:18:47.803000933 +0000 UTC m=+54.066738710" watchObservedRunningTime="2024-02-12 19:18:47.803184985 +0000 UTC m=+54.066922762" Feb 12 19:18:47.840664 kubelet[1402]: I0212 19:18:47.840625 1402 topology_manager.go:215] "Topology Admit Handler" podUID="d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672" podNamespace="kube-system" podName="cilium-xsmwq" Feb 12 19:18:47.846425 systemd[1]: Created slice kubepods-burstable-podd9d042ff_d9ea_4b1a_a34c_f9eb3ea50672.slice. 
Feb 12 19:18:48.007410 kubelet[1402]: I0212 19:18:48.007265 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-cilium-config-path\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007410 kubelet[1402]: I0212 19:18:48.007324 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-hostproc\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007410 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-etc-cni-netd\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007443 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-clustermesh-secrets\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007472 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-host-proc-sys-net\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007496 1402 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-host-proc-sys-kernel\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007517 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-cilium-run\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007582 kubelet[1402]: I0212 19:18:48.007534 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-bpf-maps\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007555 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-cilium-ipsec-secrets\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007572 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-cilium-cgroup\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007590 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-cni-path\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007611 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-lib-modules\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007640 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-xtables-lock\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007727 kubelet[1402]: I0212 19:18:48.007658 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-hubble-tls\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.007902 kubelet[1402]: I0212 19:18:48.007676 1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8ms\" (UniqueName: \"kubernetes.io/projected/d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672-kube-api-access-ws8ms\") pod \"cilium-xsmwq\" (UID: \"d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672\") " pod="kube-system/cilium-xsmwq" Feb 12 19:18:48.158872 kubelet[1402]: E0212 19:18:48.158848 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:48.159369 env[1135]: time="2024-02-12T19:18:48.159334403Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsmwq,Uid:d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672,Namespace:kube-system,Attempt:0,}" Feb 12 19:18:48.172246 env[1135]: time="2024-02-12T19:18:48.172182491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:18:48.172327 env[1135]: time="2024-02-12T19:18:48.172261015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:18:48.172327 env[1135]: time="2024-02-12T19:18:48.172287257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:18:48.172487 env[1135]: time="2024-02-12T19:18:48.172451707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65 pid=3044 runtime=io.containerd.runc.v2 Feb 12 19:18:48.182574 systemd[1]: Started cri-containerd-c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65.scope. 
Feb 12 19:18:48.214347 env[1135]: time="2024-02-12T19:18:48.214292966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsmwq,Uid:d9d042ff-d9ea-4b1a-a34c-f9eb3ea50672,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\"" Feb 12 19:18:48.215133 kubelet[1402]: E0212 19:18:48.214965 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:48.216988 env[1135]: time="2024-02-12T19:18:48.216955165Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:18:48.227455 env[1135]: time="2024-02-12T19:18:48.227414710Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169\"" Feb 12 19:18:48.227951 env[1135]: time="2024-02-12T19:18:48.227924981Z" level=info msg="StartContainer for \"3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169\"" Feb 12 19:18:48.240629 systemd[1]: Started cri-containerd-3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169.scope. Feb 12 19:18:48.275404 env[1135]: time="2024-02-12T19:18:48.274592209Z" level=info msg="StartContainer for \"3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169\" returns successfully" Feb 12 19:18:48.282865 systemd[1]: cri-containerd-3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169.scope: Deactivated successfully. 
Feb 12 19:18:48.304456 env[1135]: time="2024-02-12T19:18:48.304406790Z" level=info msg="shim disconnected" id=3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169 Feb 12 19:18:48.304456 env[1135]: time="2024-02-12T19:18:48.304450912Z" level=warning msg="cleaning up after shim disconnected" id=3d0881f579b55dd32de9429bcc98533bc54079d5651f4d6647d24f8f2991f169 namespace=k8s.io Feb 12 19:18:48.304628 env[1135]: time="2024-02-12T19:18:48.304460473Z" level=info msg="cleaning up dead shim" Feb 12 19:18:48.310640 env[1135]: time="2024-02-12T19:18:48.310610360Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n" Feb 12 19:18:48.635703 kubelet[1402]: E0212 19:18:48.635664 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:48.685841 kubelet[1402]: I0212 19:18:48.685805 1402 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="386b301f-1da0-4e9d-865a-cef8d034b057" path="/var/lib/kubelet/pods/386b301f-1da0-4e9d-865a-cef8d034b057/volumes" Feb 12 19:18:48.792260 kubelet[1402]: E0212 19:18:48.792181 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:48.792260 kubelet[1402]: E0212 19:18:48.792202 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:48.794162 env[1135]: time="2024-02-12T19:18:48.794116605Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:18:48.806676 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3324961490.mount: Deactivated successfully. Feb 12 19:18:48.808393 env[1135]: time="2024-02-12T19:18:48.808355176Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f\"" Feb 12 19:18:48.809035 env[1135]: time="2024-02-12T19:18:48.808999294Z" level=info msg="StartContainer for \"2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f\"" Feb 12 19:18:48.830786 systemd[1]: Started cri-containerd-2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f.scope. Feb 12 19:18:48.857685 env[1135]: time="2024-02-12T19:18:48.857629519Z" level=info msg="StartContainer for \"2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f\" returns successfully" Feb 12 19:18:48.864192 systemd[1]: cri-containerd-2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f.scope: Deactivated successfully. Feb 12 19:18:48.887216 env[1135]: time="2024-02-12T19:18:48.887099720Z" level=info msg="shim disconnected" id=2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f Feb 12 19:18:48.887216 env[1135]: time="2024-02-12T19:18:48.887144922Z" level=warning msg="cleaning up after shim disconnected" id=2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f namespace=k8s.io Feb 12 19:18:48.887216 env[1135]: time="2024-02-12T19:18:48.887153563Z" level=info msg="cleaning up dead shim" Feb 12 19:18:48.893684 env[1135]: time="2024-02-12T19:18:48.893631510Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3191 runtime=io.containerd.runc.v2\n" Feb 12 19:18:49.299008 systemd[1]: run-containerd-runc-k8s.io-2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f-runc.0GYueF.mount: Deactivated successfully. 
Feb 12 19:18:49.299111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ec82fcb46a5942dd8e09beaa752ac1d0816c9c5f4c7d7e0c3a119e06ab39b9f-rootfs.mount: Deactivated successfully. Feb 12 19:18:49.635858 kubelet[1402]: E0212 19:18:49.635804 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:49.656449 kubelet[1402]: E0212 19:18:49.656434 1402 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:18:49.795478 kubelet[1402]: E0212 19:18:49.795452 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:49.797587 env[1135]: time="2024-02-12T19:18:49.797540620Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:18:49.811572 env[1135]: time="2024-02-12T19:18:49.811524745Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b\"" Feb 12 19:18:49.812441 env[1135]: time="2024-02-12T19:18:49.812413796Z" level=info msg="StartContainer for \"52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b\"" Feb 12 19:18:49.828739 systemd[1]: Started cri-containerd-52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b.scope. 
Feb 12 19:18:49.861520 env[1135]: time="2024-02-12T19:18:49.861476220Z" level=info msg="StartContainer for \"52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b\" returns successfully" Feb 12 19:18:49.864427 systemd[1]: cri-containerd-52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b.scope: Deactivated successfully. Feb 12 19:18:49.885726 env[1135]: time="2024-02-12T19:18:49.885682053Z" level=info msg="shim disconnected" id=52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b Feb 12 19:18:49.885996 env[1135]: time="2024-02-12T19:18:49.885936668Z" level=warning msg="cleaning up after shim disconnected" id=52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b namespace=k8s.io Feb 12 19:18:49.886069 env[1135]: time="2024-02-12T19:18:49.886054155Z" level=info msg="cleaning up dead shim" Feb 12 19:18:49.892516 env[1135]: time="2024-02-12T19:18:49.892473524Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3248 runtime=io.containerd.runc.v2\n" Feb 12 19:18:50.299068 systemd[1]: run-containerd-runc-k8s.io-52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b-runc.ZSL0EE.mount: Deactivated successfully. Feb 12 19:18:50.299175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52491f54aa18196b33453c7040ddab3b5b5149aeb515515e0844f6dc85ad429b-rootfs.mount: Deactivated successfully. 
Feb 12 19:18:50.636283 kubelet[1402]: E0212 19:18:50.636252 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:50.798458 kubelet[1402]: E0212 19:18:50.798412 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:50.800363 env[1135]: time="2024-02-12T19:18:50.800328192Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:18:50.818199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145168373.mount: Deactivated successfully. Feb 12 19:18:50.820722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152564751.mount: Deactivated successfully. Feb 12 19:18:50.820878 env[1135]: time="2024-02-12T19:18:50.820724405Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a\"" Feb 12 19:18:50.821815 env[1135]: time="2024-02-12T19:18:50.821784304Z" level=info msg="StartContainer for \"85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a\"" Feb 12 19:18:50.836459 systemd[1]: Started cri-containerd-85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a.scope. Feb 12 19:18:50.870622 systemd[1]: cri-containerd-85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a.scope: Deactivated successfully. 
Feb 12 19:18:50.871258 env[1135]: time="2024-02-12T19:18:50.871218648Z" level=info msg="StartContainer for \"85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a\" returns successfully" Feb 12 19:18:50.899381 env[1135]: time="2024-02-12T19:18:50.899255445Z" level=info msg="shim disconnected" id=85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a Feb 12 19:18:50.899381 env[1135]: time="2024-02-12T19:18:50.899314928Z" level=warning msg="cleaning up after shim disconnected" id=85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a namespace=k8s.io Feb 12 19:18:50.899381 env[1135]: time="2024-02-12T19:18:50.899324889Z" level=info msg="cleaning up dead shim" Feb 12 19:18:50.907734 env[1135]: time="2024-02-12T19:18:50.907670192Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3304 runtime=io.containerd.runc.v2\n" Feb 12 19:18:51.299120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85884c8ab126e7529ddd03a86d56122fe0090d39549aef5e3828691c0fd9c26a-rootfs.mount: Deactivated successfully. 
Feb 12 19:18:51.636541 kubelet[1402]: E0212 19:18:51.636487 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:51.802117 kubelet[1402]: E0212 19:18:51.802065 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:51.804670 env[1135]: time="2024-02-12T19:18:51.804622779Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:18:51.817449 env[1135]: time="2024-02-12T19:18:51.817395303Z" level=info msg="CreateContainer within sandbox \"c9675eec3c6bdd49adc993f48e68574c988d8574e74cf7ae78a6027a82a1bc65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647\"" Feb 12 19:18:51.818096 env[1135]: time="2024-02-12T19:18:51.818039698Z" level=info msg="StartContainer for \"e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647\"" Feb 12 19:18:51.842519 systemd[1]: Started cri-containerd-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647.scope. Feb 12 19:18:51.883072 env[1135]: time="2024-02-12T19:18:51.883029542Z" level=info msg="StartContainer for \"e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647\" returns successfully" Feb 12 19:18:52.132966 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:18:52.299286 systemd[1]: run-containerd-runc-k8s.io-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647-runc.7v0rJk.mount: Deactivated successfully. 
Feb 12 19:18:52.637628 kubelet[1402]: E0212 19:18:52.637579 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:52.807440 kubelet[1402]: E0212 19:18:52.807394 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:52.822316 kubelet[1402]: I0212 19:18:52.822232 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xsmwq" podStartSLOduration=5.8221975310000005 podCreationTimestamp="2024-02-12 19:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:18:52.821481094 +0000 UTC m=+59.085218911" watchObservedRunningTime="2024-02-12 19:18:52.822197531 +0000 UTC m=+59.085935308" Feb 12 19:18:53.638160 kubelet[1402]: E0212 19:18:53.638125 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:54.160936 kubelet[1402]: E0212 19:18:54.160905 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:54.503622 systemd[1]: run-containerd-runc-k8s.io-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647-runc.rAusPW.mount: Deactivated successfully. 
Feb 12 19:18:54.596757 kubelet[1402]: E0212 19:18:54.596710 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:54.604067 env[1135]: time="2024-02-12T19:18:54.604028293Z" level=info msg="StopPodSandbox for \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\"" Feb 12 19:18:54.604463 env[1135]: time="2024-02-12T19:18:54.604112537Z" level=info msg="TearDown network for sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" successfully" Feb 12 19:18:54.604463 env[1135]: time="2024-02-12T19:18:54.604145219Z" level=info msg="StopPodSandbox for \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" returns successfully" Feb 12 19:18:54.604889 env[1135]: time="2024-02-12T19:18:54.604858574Z" level=info msg="RemovePodSandbox for \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\"" Feb 12 19:18:54.604958 env[1135]: time="2024-02-12T19:18:54.604893455Z" level=info msg="Forcibly stopping sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\"" Feb 12 19:18:54.605030 env[1135]: time="2024-02-12T19:18:54.604960179Z" level=info msg="TearDown network for sandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" successfully" Feb 12 19:18:54.609394 env[1135]: time="2024-02-12T19:18:54.609328751Z" level=info msg="RemovePodSandbox \"b8e94c0f654d3e2a5754e71df396b7a07653ef48a9a387ec1e3282a11c32cb63\" returns successfully" Feb 12 19:18:54.639031 kubelet[1402]: E0212 19:18:54.639003 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:54.793948 systemd-networkd[1046]: lxc_health: Link UP Feb 12 19:18:54.801311 systemd-networkd[1046]: lxc_health: Gained carrier Feb 12 19:18:54.802335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:18:55.640026 kubelet[1402]: E0212 19:18:55.639988 1402 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:55.869222 systemd-networkd[1046]: lxc_health: Gained IPv6LL Feb 12 19:18:56.161372 kubelet[1402]: E0212 19:18:56.161345 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:56.641028 kubelet[1402]: E0212 19:18:56.640997 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:56.641378 systemd[1]: run-containerd-runc-k8s.io-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647-runc.PJpTFh.mount: Deactivated successfully. Feb 12 19:18:56.813268 kubelet[1402]: E0212 19:18:56.813052 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:57.641335 kubelet[1402]: E0212 19:18:57.641299 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:57.815360 kubelet[1402]: E0212 19:18:57.815337 1402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:58.642840 kubelet[1402]: E0212 19:18:58.642779 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:18:58.785913 systemd[1]: run-containerd-runc-k8s.io-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647-runc.F7o5vu.mount: Deactivated successfully. 
Feb 12 19:18:59.643292 kubelet[1402]: E0212 19:18:59.643256 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:19:00.643945 kubelet[1402]: E0212 19:19:00.643906 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:19:00.923669 systemd[1]: run-containerd-runc-k8s.io-e341b69b14be8a5c1229d33f0df5cfc462038218a90c1d03205c647a01459647-runc.aZky9L.mount: Deactivated successfully. Feb 12 19:19:00.971262 kubelet[1402]: E0212 19:19:00.971224 1402 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49686->127.0.0.1:43863: write tcp 127.0.0.1:49686->127.0.0.1:43863: write: broken pipe Feb 12 19:19:01.644580 kubelet[1402]: E0212 19:19:01.644518 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:19:02.645171 kubelet[1402]: E0212 19:19:02.645126 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"