Feb 9 18:34:37.027793 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:34:37.027811 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:34:37.027819 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 18:34:37.027826 kernel: printk: bootconsole [pl11] enabled
Feb 9 18:34:37.027831 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:34:37.027836 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2d698 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 18:34:37.027843 kernel: random: crng init done
Feb 9 18:34:37.027849 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:34:37.027854 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 18:34:37.027859 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027865 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027872 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 18:34:37.027878 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027883 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027890 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027896 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027902 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027909 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027915 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 18:34:37.027921 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:34:37.027926 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 18:34:37.027932 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:34:37.027938 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:34:37.027943 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 18:34:37.027949 kernel: Zone ranges:
Feb 9 18:34:37.027955 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 18:34:37.027960 kernel: DMA32 empty
Feb 9 18:34:37.027967 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:34:37.027973 kernel: Movable zone start for each node
Feb 9 18:34:37.027978 kernel: Early memory node ranges
Feb 9 18:34:37.027984 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 18:34:37.027990 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 18:34:37.027995 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 18:34:37.028001 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 18:34:37.028007 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 18:34:37.028012 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 18:34:37.028018 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 18:34:37.028024 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 18:34:37.028029 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:34:37.028037 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:34:37.028045 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 18:34:37.028051 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:34:37.028057 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:34:37.028063 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:34:37.028070 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 18:34:37.028076 kernel: psci: SMC Calling Convention v1.4
Feb 9 18:34:37.028082 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 18:34:37.028088 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 18:34:37.028095 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:34:37.028101 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:34:37.028114 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 18:34:37.028121 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:34:37.028127 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:34:37.028133 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:34:37.028139 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:34:37.028145 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:34:37.028153 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:34:37.028159 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:34:37.028165 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 18:34:37.028174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 18:34:37.028182 kernel: Policy zone: Normal
Feb 9 18:34:37.028189 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:34:37.028196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:34:37.028202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:34:37.028208 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:34:37.028214 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:34:37.028221 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 18:34:37.028231 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 18:34:37.028237 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 18:34:37.028243 kernel: trace event string verifier disabled
Feb 9 18:34:37.028249 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:34:37.028255 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:34:37.028261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 18:34:37.028268 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:34:37.028276 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:34:37.028282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:34:37.028289 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 18:34:37.028296 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:34:37.028302 kernel: GICv3: 960 SPIs implemented
Feb 9 18:34:37.028308 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:34:37.028314 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:34:37.028320 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:34:37.028329 kernel: GICv3: 16 PPIs implemented
Feb 9 18:34:37.028335 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 18:34:37.028341 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 18:34:37.028347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:37.028353 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:34:37.028359 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:34:37.028365 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:34:37.028373 kernel: Console: colour dummy device 80x25
Feb 9 18:34:37.028380 kernel: printk: console [tty1] enabled
Feb 9 18:34:37.028386 kernel: ACPI: Core revision 20210730
Feb 9 18:34:37.028393 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:34:37.028402 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:34:37.028408 kernel: LSM: Security Framework initializing
Feb 9 18:34:37.028414 kernel: SELinux: Initializing.
Feb 9 18:34:37.028420 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:34:37.028427 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:34:37.028434 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 18:34:37.028441 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 18:34:37.028450 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:34:37.028456 kernel: Remapping and enabling EFI services.
Feb 9 18:34:37.028462 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:34:37.028468 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:34:37.028474 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 18:34:37.028481 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:37.028500 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:34:37.028509 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 18:34:37.028516 kernel: SMP: Total of 2 processors activated.
Feb 9 18:34:37.028522 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:34:37.028528 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 18:34:37.028535 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:34:37.028544 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:34:37.028550 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:34:37.028556 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:34:37.028562 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:34:37.028570 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:34:37.028576 kernel: alternatives: patching kernel code
Feb 9 18:34:37.028587 kernel: devtmpfs: initialized
Feb 9 18:34:37.028594 kernel: KASLR enabled
Feb 9 18:34:37.028604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:34:37.028611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 18:34:37.028617 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:34:37.028624 kernel: SMBIOS 3.1.0 present.
Feb 9 18:34:37.028630 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 18:34:37.028637 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:34:37.028645 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:34:37.028652 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:34:37.028658 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:34:37.028665 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:34:37.028671 kernel: audit: type=2000 audit(0.092:1): state=initialized audit_enabled=0 res=1
Feb 9 18:34:37.028681 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:34:37.028687 kernel: cpuidle: using governor menu
Feb 9 18:34:37.028695 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:34:37.028702 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:34:37.028709 kernel: ACPI: bus type PCI registered
Feb 9 18:34:37.028715 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:34:37.028722 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:34:37.028729 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:34:37.028735 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:34:37.028745 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:34:37.028752 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:34:37.028760 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:34:37.028767 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:34:37.028773 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:34:37.028780 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:34:37.028789 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:34:37.028797 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:34:37.028803 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:34:37.028810 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:34:37.028816 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:34:37.028825 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:34:37.028831 kernel: ACPI: Interpreter enabled
Feb 9 18:34:37.028838 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:34:37.028848 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:34:37.028855 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:34:37.028861 kernel: printk: bootconsole [pl11] disabled
Feb 9 18:34:37.028868 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 18:34:37.028874 kernel: iommu: Default domain type: Translated
Feb 9 18:34:37.028881 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:34:37.028889 kernel: vgaarb: loaded
Feb 9 18:34:37.028896 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:34:37.028902 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:34:37.028912 kernel: PTP clock support registered
Feb 9 18:34:37.028918 kernel: Registered efivars operations
Feb 9 18:34:37.028924 kernel: No ACPI PMU IRQ for CPU0
Feb 9 18:34:37.028931 kernel: No ACPI PMU IRQ for CPU1
Feb 9 18:34:37.028938 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:34:37.028944 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:34:37.028952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:34:37.028958 kernel: pnp: PnP ACPI init
Feb 9 18:34:37.028965 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 18:34:37.028974 kernel: NET: Registered PF_INET protocol family
Feb 9 18:34:37.028981 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:34:37.028988 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:34:37.028994 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:34:37.029001 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:34:37.029008 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:34:37.029016 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:34:37.029022 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:34:37.029029 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:34:37.029036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:34:37.029042 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:34:37.029052 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 18:34:37.029060 kernel: kvm [1]: HYP mode not available
Feb 9 18:34:37.029067 kernel: Initialise system trusted keyrings
Feb 9 18:34:37.029073 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:34:37.029081 kernel: Key type asymmetric registered
Feb 9 18:34:37.029088 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:34:37.029097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:34:37.029104 kernel: io scheduler mq-deadline registered
Feb 9 18:34:37.029111 kernel: io scheduler kyber registered
Feb 9 18:34:37.029117 kernel: io scheduler bfq registered
Feb 9 18:34:37.029124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:34:37.029130 kernel: thunder_xcv, ver 1.0
Feb 9 18:34:37.029137 kernel: thunder_bgx, ver 1.0
Feb 9 18:34:37.029147 kernel: nicpf, ver 1.0
Feb 9 18:34:37.029154 kernel: nicvf, ver 1.0
Feb 9 18:34:37.029276 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:34:37.029346 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:34:36 UTC (1707503676)
Feb 9 18:34:37.029355 kernel: efifb: probing for efifb
Feb 9 18:34:37.029362 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 18:34:37.029369 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 18:34:37.029380 kernel: efifb: scrolling: redraw
Feb 9 18:34:37.029389 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 18:34:37.029395 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:34:37.029402 kernel: fb0: EFI VGA frame buffer device
Feb 9 18:34:37.029409 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 18:34:37.029418 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:34:37.029425 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:34:37.029432 kernel: Segment Routing with IPv6
Feb 9 18:34:37.029438 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:34:37.029445 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:34:37.032533 kernel: Key type dns_resolver registered
Feb 9 18:34:37.032545 kernel: registered taskstats version 1
Feb 9 18:34:37.032552 kernel: Loading compiled-in X.509 certificates
Feb 9 18:34:37.032559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:34:37.032566 kernel: Key type .fscrypt registered
Feb 9 18:34:37.032573 kernel: Key type fscrypt-provisioning registered
Feb 9 18:34:37.032580 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:34:37.032586 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:34:37.032593 kernel: ima: No architecture policies found
Feb 9 18:34:37.032603 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:34:37.032610 kernel: Run /init as init process
Feb 9 18:34:37.032616 kernel: with arguments:
Feb 9 18:34:37.032623 kernel: /init
Feb 9 18:34:37.032630 kernel: with environment:
Feb 9 18:34:37.032636 kernel: HOME=/
Feb 9 18:34:37.032643 kernel: TERM=linux
Feb 9 18:34:37.032649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:34:37.032658 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:34:37.032669 systemd[1]: Detected virtualization microsoft.
Feb 9 18:34:37.032676 systemd[1]: Detected architecture arm64.
Feb 9 18:34:37.032683 systemd[1]: Running in initrd.
Feb 9 18:34:37.032690 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:34:37.032697 systemd[1]: Hostname set to .
Feb 9 18:34:37.032704 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:34:37.032711 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:34:37.032720 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:34:37.032727 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:34:37.032734 systemd[1]: Reached target paths.target.
Feb 9 18:34:37.032741 systemd[1]: Reached target slices.target.
Feb 9 18:34:37.032748 systemd[1]: Reached target swap.target.
Feb 9 18:34:37.032755 systemd[1]: Reached target timers.target.
Feb 9 18:34:37.032762 systemd[1]: Listening on iscsid.socket.
Feb 9 18:34:37.032769 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:34:37.032778 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:34:37.032785 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:34:37.032792 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:34:37.032799 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:34:37.032806 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:34:37.032813 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:34:37.032820 systemd[1]: Reached target sockets.target.
Feb 9 18:34:37.032828 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:34:37.032835 systemd[1]: Finished network-cleanup.service.
Feb 9 18:34:37.032843 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:34:37.032850 systemd[1]: Starting systemd-journald.service...
Feb 9 18:34:37.032857 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:34:37.032864 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:34:37.032871 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:34:37.032881 systemd-journald[276]: Journal started
Feb 9 18:34:37.032927 systemd-journald[276]: Runtime Journal (/run/log/journal/64eaa1ac2836468c94a6bca194f9066a) is 8.0M, max 78.6M, 70.6M free.
Feb 9 18:34:37.011537 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 18:34:37.053052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:34:37.061310 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 18:34:37.082034 kernel: Bridge firewalling registered
Feb 9 18:34:37.082055 systemd[1]: Started systemd-journald.service.
Feb 9 18:34:37.074766 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 18:34:37.124400 kernel: audit: type=1130 audit(1707503677.081:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.124422 kernel: SCSI subsystem initialized
Feb 9 18:34:37.124431 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:34:37.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.074774 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:34:37.164615 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:34:37.164639 kernel: audit: type=1130 audit(1707503677.128:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.164651 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:34:37.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.074801 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:34:37.221132 kernel: audit: type=1130 audit(1707503677.201:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.076847 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 18:34:37.282038 kernel: audit: type=1130 audit(1707503677.227:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.282063 kernel: audit: type=1130 audit(1707503677.252:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.104629 systemd[1]: Started systemd-resolved.service.
Feb 9 18:34:37.150643 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:34:37.326223 kernel: audit: type=1130 audit(1707503677.285:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.169779 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 18:34:37.201866 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:34:37.228128 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:34:37.376195 kernel: audit: type=1130 audit(1707503677.353:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.253094 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:34:37.285726 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:34:37.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.295589 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:34:37.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.331884 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:34:37.446096 kernel: audit: type=1130 audit(1707503677.386:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.446541 kernel: audit: type=1130 audit(1707503677.413:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.341728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:34:37.349366 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:34:37.373817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:34:37.463300 dracut-cmdline[298]: dracut-dracut-053
Feb 9 18:34:37.387185 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:34:37.473101 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:34:37.417700 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:34:37.564519 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:34:37.575509 kernel: iscsi: registered transport (tcp)
Feb 9 18:34:37.594977 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:34:37.595019 kernel: QLogic iSCSI HBA Driver
Feb 9 18:34:37.630515 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:34:37.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:37.636616 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:34:37.694511 kernel: raid6: neonx8 gen() 13801 MB/s
Feb 9 18:34:37.715501 kernel: raid6: neonx8 xor() 10818 MB/s
Feb 9 18:34:37.736506 kernel: raid6: neonx4 gen() 13570 MB/s
Feb 9 18:34:37.758504 kernel: raid6: neonx4 xor() 11244 MB/s
Feb 9 18:34:37.778514 kernel: raid6: neonx2 gen() 13104 MB/s
Feb 9 18:34:37.799505 kernel: raid6: neonx2 xor() 10234 MB/s
Feb 9 18:34:37.821504 kernel: raid6: neonx1 gen() 10504 MB/s
Feb 9 18:34:37.842500 kernel: raid6: neonx1 xor() 8795 MB/s
Feb 9 18:34:37.863500 kernel: raid6: int64x8 gen() 6291 MB/s
Feb 9 18:34:37.885501 kernel: raid6: int64x8 xor() 3543 MB/s
Feb 9 18:34:37.906504 kernel: raid6: int64x4 gen() 7255 MB/s
Feb 9 18:34:37.928501 kernel: raid6: int64x4 xor() 3856 MB/s
Feb 9 18:34:37.949504 kernel: raid6: int64x2 gen() 6152 MB/s
Feb 9 18:34:37.970499 kernel: raid6: int64x2 xor() 3320 MB/s
Feb 9 18:34:37.992504 kernel: raid6: int64x1 gen() 5040 MB/s
Feb 9 18:34:38.017405 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 18:34:38.017424 kernel: raid6: using algorithm neonx8 gen() 13801 MB/s
Feb 9 18:34:38.017440 kernel: raid6: .... xor() 10818 MB/s, rmw enabled
Feb 9 18:34:38.023213 kernel: raid6: using neon recovery algorithm
Feb 9 18:34:38.045359 kernel: xor: measuring software checksum speed
Feb 9 18:34:38.045381 kernel: 8regs : 17293 MB/sec
Feb 9 18:34:38.054276 kernel: 32regs : 20760 MB/sec
Feb 9 18:34:38.054286 kernel: arm64_neon : 27731 MB/sec
Feb 9 18:34:38.054294 kernel: xor: using function: arm64_neon (27731 MB/sec)
Feb 9 18:34:38.114508 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:34:38.124602 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:34:38.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:38.133000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:34:38.133000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:34:38.134294 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:34:38.152721 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Feb 9 18:34:38.158733 systemd[1]: Started systemd-udevd.service.
Feb 9 18:34:38.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:38.170459 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:34:38.186252 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 18:34:38.217446 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:34:38.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:38.223161 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:34:38.258553 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:34:38.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:38.317544 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 18:34:38.317594 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 18:34:38.337615 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 18:34:38.337660 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 18:34:38.338508 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 18:34:38.362514 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 18:34:38.362569 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 18:34:38.362583 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 18:34:38.368493 kernel: scsi host1: storvsc_host_t
Feb 9 18:34:38.373003 kernel: scsi host0: storvsc_host_t
Feb 9 18:34:38.380168 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 18:34:38.388500 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 18:34:38.408121 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 18:34:38.408305 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 18:34:38.415480 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 18:34:38.420197 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 18:34:38.424650 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 18:34:38.425651 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 18:34:38.425774 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 18:34:38.425855 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 18:34:38.444568 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 18:34:38.444599 kernel: hv_netvsc 000d3a6e-367b-000d-3a6e-367b000d3a6e eth0: VF slot 1 added
Feb 9 18:34:38.444744 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 18:34:38.458513 kernel: hv_vmbus: registering driver hv_pci
Feb 9 18:34:38.458571 kernel: hv_pci 88f76ea1-40ab-4de3-bcea-ded910863cbf: PCI VMBus probing: Using version 0x10004
Feb 9 18:34:38.482353 kernel: hv_pci 88f76ea1-40ab-4de3-bcea-ded910863cbf: PCI host bridge to bus 40ab:00
Feb 9 18:34:38.482516 kernel: pci_bus 40ab:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 18:34:38.482627 kernel: pci_bus 40ab:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 18:34:38.498014 kernel: pci 40ab:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 18:34:38.510761 kernel: pci 40ab:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 18:34:38.534508 kernel: pci 40ab:00:02.0: enabling Extended Tags
Feb 9 18:34:38.557601 kernel: pci 40ab:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 40ab:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 18:34:38.570560 kernel: pci_bus 40ab:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 18:34:38.570717 kernel: pci 40ab:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 18:34:38.614536 kernel: mlx5_core 40ab:00:02.0: firmware version: 16.30.1284
Feb 9 18:34:38.775516 kernel: mlx5_core 40ab:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 18:34:38.833635 kernel: hv_netvsc 000d3a6e-367b-000d-3a6e-367b000d3a6e eth0: VF registering: eth1
Feb 9 18:34:38.833927 kernel: mlx5_core 40ab:00:02.0 eth1: joined to eth0
Feb 9 18:34:38.848515 kernel: mlx5_core 40ab:00:02.0 enP16555s1: renamed from eth1
Feb 9 18:34:38.858392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:34:38.933001 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (542)
Feb 9 18:34:38.943502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:34:39.061309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:34:39.072091 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:34:39.088117 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:34:39.107945 systemd[1]: Starting disk-uuid.service...
Feb 9 18:34:39.125509 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 18:34:39.133515 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 18:34:40.143045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 18:34:40.143102 disk-uuid[605]: The operation has completed successfully.
Feb 9 18:34:40.197056 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:34:40.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.197151 systemd[1]: Finished disk-uuid.service.
Feb 9 18:34:40.207138 systemd[1]: Starting verity-setup.service...
Feb 9 18:34:40.256506 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 18:34:40.444913 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:34:40.451505 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:34:40.464703 systemd[1]: Finished verity-setup.service.
Feb 9 18:34:40.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.522516 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:34:40.523048 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:34:40.527200 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:34:40.527937 systemd[1]: Starting ignition-setup.service...
Feb 9 18:34:40.535974 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:34:40.576924 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:34:40.576967 kernel: BTRFS info (device sda6): using free space tree
Feb 9 18:34:40.576982 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 18:34:40.613879 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:34:40.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.623000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:34:40.625067 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:34:40.652091 systemd-networkd[843]: lo: Link UP
Feb 9 18:34:40.652104 systemd-networkd[843]: lo: Gained carrier
Feb 9 18:34:40.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.652823 systemd-networkd[843]: Enumeration completed
Feb 9 18:34:40.656347 systemd[1]: Started systemd-networkd.service.
Feb 9 18:34:40.661833 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:34:40.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.662610 systemd[1]: Reached target network.target.
Feb 9 18:34:40.707899 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:34:40.707899 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 18:34:40.707899 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 18:34:40.707899 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:34:40.707899 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:34:40.707899 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:34:40.707899 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:34:40.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.675225 systemd[1]: Starting iscsiuio.service...
Feb 9 18:34:40.684616 systemd[1]: Started iscsiuio.service.
Feb 9 18:34:40.690483 systemd[1]: Starting iscsid.service...
Feb 9 18:34:40.703436 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:34:40.703856 systemd[1]: Started iscsid.service.
Feb 9 18:34:40.713332 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:34:40.746334 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:34:40.751586 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:34:40.770130 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:34:40.775616 systemd[1]: Reached target remote-fs.target.
Feb 9 18:34:40.786860 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:34:40.794586 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:34:40.876510 kernel: mlx5_core 40ab:00:02.0 enP16555s1: Link up
Feb 9 18:34:40.925981 kernel: hv_netvsc 000d3a6e-367b-000d-3a6e-367b000d3a6e eth0: Data path switched to VF: enP16555s1
Feb 9 18:34:40.926177 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:34:40.926886 systemd-networkd[843]: enP16555s1: Link UP
Feb 9 18:34:40.927068 systemd-networkd[843]: eth0: Link UP
Feb 9 18:34:40.927515 systemd-networkd[843]: eth0: Gained carrier
Feb 9 18:34:40.941930 systemd-networkd[843]: enP16555s1: Gained carrier
Feb 9 18:34:40.942696 systemd[1]: Finished ignition-setup.service.
Feb 9 18:34:40.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:40.951792 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:34:40.965562 systemd-networkd[843]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 18:34:42.854622 systemd-networkd[843]: eth0: Gained IPv6LL
Feb 9 18:34:43.716645 ignition[870]: Ignition 2.14.0
Feb 9 18:34:43.719869 ignition[870]: Stage: fetch-offline
Feb 9 18:34:43.720033 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:43.720096 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:43.825387 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:43.825577 ignition[870]: parsed url from cmdline: ""
Feb 9 18:34:43.833366 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:34:43.871558 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 18:34:43.871590 kernel: audit: type=1130 audit(1707503683.839:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:43.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:43.825581 ignition[870]: no config URL provided
Feb 9 18:34:43.840804 systemd[1]: Starting ignition-fetch.service...
Feb 9 18:34:43.825587 ignition[870]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:34:43.825595 ignition[870]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:34:43.825600 ignition[870]: failed to fetch config: resource requires networking
Feb 9 18:34:43.826019 ignition[870]: Ignition finished successfully
Feb 9 18:34:43.860693 ignition[876]: Ignition 2.14.0
Feb 9 18:34:43.860699 ignition[876]: Stage: fetch
Feb 9 18:34:43.860862 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:43.860892 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:43.864002 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:43.870290 ignition[876]: parsed url from cmdline: ""
Feb 9 18:34:43.870295 ignition[876]: no config URL provided
Feb 9 18:34:43.870302 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:34:43.870321 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:34:43.870351 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 18:34:43.963355 ignition[876]: GET result: OK
Feb 9 18:34:43.963524 ignition[876]: config has been read from IMDS userdata
Feb 9 18:34:43.963601 ignition[876]: parsing config with SHA512: da377544103fabd1e50bebd3e1f4ab0572ceb0b311d1f068f797fb5a2226be48554f3c6f599f088272585217b326d96fb8c4e63e217f5bee8923f866aa4bfdc5
Feb 9 18:34:44.015000 unknown[876]: fetched base config from "system"
Feb 9 18:34:44.019980 unknown[876]: fetched base config from "system"
Feb 9 18:34:44.019987 unknown[876]: fetched user config from "azure"
Feb 9 18:34:44.020638 ignition[876]: fetch: fetch complete
Feb 9 18:34:44.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.026068 systemd[1]: Finished ignition-fetch.service.
Feb 9 18:34:44.068582 kernel: audit: type=1130 audit(1707503684.035:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.020644 ignition[876]: fetch: fetch passed
Feb 9 18:34:44.037014 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:34:44.020688 ignition[876]: Ignition finished successfully
Feb 9 18:34:44.070104 ignition[882]: Ignition 2.14.0
Feb 9 18:34:44.112125 kernel: audit: type=1130 audit(1707503684.086:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.081447 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:34:44.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.070111 ignition[882]: Stage: kargs
Feb 9 18:34:44.150482 kernel: audit: type=1130 audit(1707503684.122:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.087366 systemd[1]: Starting ignition-disks.service...
Feb 9 18:34:44.070223 ignition[882]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:44.117716 systemd[1]: Finished ignition-disks.service.
Feb 9 18:34:44.070240 ignition[882]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:44.122847 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:34:44.073145 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:44.147254 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:34:44.075839 ignition[882]: kargs: kargs passed
Feb 9 18:34:44.155689 systemd[1]: Reached target local-fs.target.
Feb 9 18:34:44.075901 ignition[882]: Ignition finished successfully
Feb 9 18:34:44.165131 systemd[1]: Reached target sysinit.target.
Feb 9 18:34:44.096518 ignition[888]: Ignition 2.14.0
Feb 9 18:34:44.176909 systemd[1]: Reached target basic.target.
Feb 9 18:34:44.096524 ignition[888]: Stage: disks
Feb 9 18:34:44.190916 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:34:44.096615 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:44.096636 ignition[888]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:44.099059 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:44.116755 ignition[888]: disks: disks passed
Feb 9 18:34:44.116811 ignition[888]: Ignition finished successfully
Feb 9 18:34:44.295375 systemd-fsck[896]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 9 18:34:44.310862 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:34:44.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.341429 systemd[1]: Mounting sysroot.mount...
Feb 9 18:34:44.350857 kernel: audit: type=1130 audit(1707503684.315:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:44.362519 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:34:44.362745 systemd[1]: Mounted sysroot.mount.
Feb 9 18:34:44.367243 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:34:44.437622 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:34:44.442517 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 18:34:44.455298 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:34:44.455335 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:34:44.471497 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:34:44.538807 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:34:44.544683 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:34:44.569535 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907)
Feb 9 18:34:44.577435 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:34:44.594347 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:34:44.594368 kernel: BTRFS info (device sda6): using free space tree
Feb 9 18:34:44.594377 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 18:34:44.598104 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:34:44.612332 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:34:44.627398 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:34:44.652150 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:34:45.154497 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:34:45.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.179989 systemd[1]: Starting ignition-mount.service...
Feb 9 18:34:45.191008 kernel: audit: type=1130 audit(1707503685.159:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.191308 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:34:45.196141 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 18:34:45.196263 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 18:34:45.227972 ignition[974]: INFO : Ignition 2.14.0
Feb 9 18:34:45.233043 ignition[974]: INFO : Stage: mount
Feb 9 18:34:45.233043 ignition[974]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:45.233043 ignition[974]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:45.286752 kernel: audit: type=1130 audit(1707503685.251:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.246722 systemd[1]: Finished ignition-mount.service.
Feb 9 18:34:45.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.312550 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:45.312550 ignition[974]: INFO : mount: mount passed
Feb 9 18:34:45.312550 ignition[974]: INFO : Ignition finished successfully
Feb 9 18:34:45.332197 kernel: audit: type=1130 audit(1707503685.290:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.254008 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:34:45.766624 coreos-metadata[906]: Feb 09 18:34:45.766 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 18:34:45.776041 coreos-metadata[906]: Feb 09 18:34:45.775 INFO Fetch successful
Feb 9 18:34:45.803152 coreos-metadata[906]: Feb 09 18:34:45.803 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 18:34:45.818707 coreos-metadata[906]: Feb 09 18:34:45.818 INFO Fetch successful
Feb 9 18:34:45.824804 coreos-metadata[906]: Feb 09 18:34:45.824 INFO wrote hostname ci-3510.3.2-a-b879aa43fa to /sysroot/etc/hostname
Feb 9 18:34:45.837344 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 18:34:45.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.866793 systemd[1]: Starting ignition-files.service...
Feb 9 18:34:45.875818 kernel: audit: type=1130 audit(1707503685.842:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:45.878834 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:34:45.898516 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985)
Feb 9 18:34:45.912544 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:34:45.912559 kernel: BTRFS info (device sda6): using free space tree
Feb 9 18:34:45.912568 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 18:34:45.922152 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:34:45.940449 ignition[1004]: INFO : Ignition 2.14.0
Feb 9 18:34:45.940449 ignition[1004]: INFO : Stage: files
Feb 9 18:34:45.952792 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:34:45.952792 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:34:45.952792 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:34:45.952792 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:34:45.952792 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:34:45.952792 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:34:46.040113 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:34:46.048903 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:34:46.065468 unknown[1004]: wrote ssh authorized keys file for user: core
Feb 9 18:34:46.072100 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:34:46.072100 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 18:34:46.072100 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 18:34:46.560593 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 18:34:46.717685 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 18:34:46.734858 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 18:34:46.734858 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 18:34:46.734858 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 18:34:46.846264 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 18:34:47.065027 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 18:34:47.077265 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 18:34:47.077265 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 18:34:47.425988 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 18:34:47.716303 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 18:34:47.733343 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 18:34:47.733343 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 18:34:47.733343 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Feb 9 18:34:47.971335 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 18:34:48.281663 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Feb 9 18:34:48.298760 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 18:34:48.298760 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:34:48.298760 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 9 18:34:48.358961 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 18:34:48.636918 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 9 18:34:48.653298 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:34:48.653298 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:34:48.653298 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 9 18:34:48.696058 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 18:34:49.280170 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 9 18:34:49.298845 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:34:49.298845 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:34:49.298845 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:34:49.298845 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 18:34:49.298845 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 18:34:49.671889 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 18:34:49.741915 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:34:49.753268 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:34:50.122748 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:34:50.133789 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:34:50.133789 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:34:50.133789 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:34:50.176356 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1007)
Feb 9 18:34:50.176379 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3577715464"
Feb 9 18:34:50.176379 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3577715464": device or resource busy
Feb 9 18:34:50.176379 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3577715464", trying btrfs: device or resource busy
Feb 9 18:34:50.176379 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3577715464"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3577715464"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3577715464"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3577715464"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem22449664"
Feb 9 18:34:50.229755 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem22449664": device or resource busy
Feb 9 18:34:50.229755 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem22449664", trying btrfs: device or resource busy
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem22449664"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem22449664"
Feb 9 18:34:50.229755 ignition[1004]: INFO : files:
createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem22449664" Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem22449664" Feb 9 18:34:50.229755 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:34:50.229755 ignition[1004]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 18:34:50.229755 ignition[1004]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 18:34:50.466695 kernel: audit: type=1130 audit(1707503690.234:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.466726 kernel: audit: type=1130 audit(1707503690.295:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.466736 kernel: audit: type=1131 audit(1707503690.295:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:50.189978 systemd[1]: mnt-oem3577715464.mount: Deactivated successfully. Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1e): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(1e): [finished] 
processing unit "prepare-cni-plugins.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:34:50.472926 ignition[1004]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:34:50.798944 kernel: audit: type=1130 audit(1707503690.529:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.798971 kernel: audit: type=1130 audit(1707503690.610:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.798981 kernel: audit: type=1131 audit(1707503690.637:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.798990 kernel: audit: type=1130 audit(1707503690.735:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:50.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.205410 systemd[1]: mnt-oem22449664.mount: Deactivated successfully. Feb 9 18:34:50.810048 ignition[1004]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:34:50.810048 ignition[1004]: INFO : files: files passed Feb 9 18:34:50.810048 ignition[1004]: INFO : Ignition finished successfully Feb 9 18:34:50.941917 kernel: audit: type=1131 audit(1707503690.845:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:50.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.221023 systemd[1]: Finished ignition-files.service. Feb 9 18:34:50.952880 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:34:50.236285 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:34:50.266108 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:34:50.266958 systemd[1]: Starting ignition-quench.service... Feb 9 18:34:50.282019 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:34:50.282130 systemd[1]: Finished ignition-quench.service. Feb 9 18:34:50.520718 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:34:51.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.558932 systemd[1]: Reached target ignition-complete.target. Feb 9 18:34:50.572903 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:34:50.600405 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:34:51.107339 kernel: audit: type=1131 audit(1707503691.035:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.107363 kernel: audit: type=1131 audit(1707503691.081:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:51.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.600581 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:34:50.638001 systemd[1]: Reached target initrd-fs.target. Feb 9 18:34:51.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.657188 systemd[1]: Reached target initrd.target. Feb 9 18:34:51.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.694459 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:34:50.703282 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:34:50.729761 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:34:51.173760 ignition[1042]: INFO : Ignition 2.14.0 Feb 9 18:34:51.173760 ignition[1042]: INFO : Stage: umount Feb 9 18:34:51.173760 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:34:51.173760 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:34:51.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:51.212903 iscsid[854]: iscsid shutting down. Feb 9 18:34:50.773969 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:34:51.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.235275 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:34:51.235275 ignition[1042]: INFO : umount: umount passed Feb 9 18:34:51.235275 ignition[1042]: INFO : Ignition finished successfully Feb 9 18:34:51.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.797497 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:34:51.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.804212 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:34:51.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.815634 systemd[1]: Stopped target timers.target. 
Feb 9 18:34:51.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.831068 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:34:51.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.831132 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:34:51.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.875459 systemd[1]: Stopped target initrd.target. Feb 9 18:34:51.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.890876 systemd[1]: Stopped target basic.target. Feb 9 18:34:50.907599 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:34:50.925214 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:34:50.936165 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:34:50.947947 systemd[1]: Stopped target remote-fs.target. Feb 9 18:34:50.958049 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:34:51.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.973749 systemd[1]: Stopped target sysinit.target. Feb 9 18:34:50.985048 systemd[1]: Stopped target local-fs.target. Feb 9 18:34:51.003095 systemd[1]: Stopped target local-fs-pre.target. 
Feb 9 18:34:51.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.014316 systemd[1]: Stopped target swap.target. Feb 9 18:34:51.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.025213 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:34:51.025272 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:34:51.061979 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:34:51.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.072644 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:34:51.072700 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:34:51.082032 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:34:51.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.082074 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:34:51.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.494000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:34:51.113712 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:34:51.113759 systemd[1]: Stopped ignition-files.service. 
Feb 9 18:34:51.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.123445 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 18:34:51.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.123485 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 18:34:51.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.138054 systemd[1]: Stopping ignition-mount.service... Feb 9 18:34:51.153553 systemd[1]: Stopping iscsid.service... Feb 9 18:34:51.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.167711 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:34:51.167792 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:34:51.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.193940 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:34:51.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.220468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 9 18:34:51.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.220573 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:34:51.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.230163 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:34:51.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.230209 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:34:51.682403 kernel: hv_netvsc 000d3a6e-367b-000d-3a6e-367b000d3a6e eth0: Data path switched from VF: enP16555s1 Feb 9 18:34:51.248836 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:34:51.248949 systemd[1]: Stopped iscsid.service. Feb 9 18:34:51.259415 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:34:51.259875 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:34:51.259955 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:34:51.269334 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:34:51.269408 systemd[1]: Stopped ignition-mount.service. Feb 9 18:34:51.279193 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:34:51.279245 systemd[1]: Stopped ignition-disks.service. Feb 9 18:34:51.289724 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Feb 9 18:34:51.289770 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:34:51.299650 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:34:51.299687 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:34:51.311314 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:34:51.311356 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:34:51.322310 systemd[1]: Stopped target paths.target. Feb 9 18:34:51.332271 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:34:51.340511 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:34:51.348855 systemd[1]: Stopped target slices.target. Feb 9 18:34:51.359245 systemd[1]: Stopped target sockets.target. Feb 9 18:34:51.370931 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:34:51.370975 systemd[1]: Closed iscsid.socket. Feb 9 18:34:51.375865 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:34:51.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:51.375909 systemd[1]: Stopped ignition-setup.service. Feb 9 18:34:51.386443 systemd[1]: Stopping iscsiuio.service... Feb 9 18:34:51.401248 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:34:51.401357 systemd[1]: Stopped iscsiuio.service. Feb 9 18:34:51.411126 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:34:51.411223 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:34:51.421556 systemd[1]: Stopped target network.target. Feb 9 18:34:51.430691 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:34:51.430738 systemd[1]: Closed iscsiuio.socket. Feb 9 18:34:51.441584 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:34:51.898080 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). 
Feb 9 18:34:51.441627 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:34:51.451698 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:34:51.460534 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:34:51.470657 systemd-networkd[843]: eth0: DHCPv6 lease lost Feb 9 18:34:51.897000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:34:51.472682 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:34:51.472781 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:34:51.482724 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:34:51.482823 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:34:51.495306 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:34:51.495345 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:34:51.508054 systemd[1]: Stopping network-cleanup.service... Feb 9 18:34:51.519361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:34:51.519464 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:34:51.526229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:34:51.526286 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:34:51.544194 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:34:51.544260 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:34:51.550791 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:34:51.563333 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:34:51.563880 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:34:51.564017 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:34:51.573808 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:34:51.573851 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:34:51.584352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:34:51.584387 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 9 18:34:51.590530 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:34:51.590586 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:34:51.603021 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:34:51.603063 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:34:51.614280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:34:51.614317 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:34:51.626068 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:34:51.637969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:34:51.638022 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:34:51.645006 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:34:51.645107 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:34:51.801635 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:34:51.801749 systemd[1]: Stopped network-cleanup.service. Feb 9 18:34:51.812040 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:34:51.825941 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:34:51.845023 systemd[1]: Switching root. Feb 9 18:34:51.899188 systemd-journald[276]: Journal stopped Feb 9 18:35:03.489411 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:35:03.489442 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:35:03.489456 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:35:03.489467 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:35:03.489475 kernel: SELinux: policy capability open_perms=1 Feb 9 18:35:03.489483 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:35:03.489504 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:35:03.489514 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:35:03.489522 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:35:03.489530 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:35:03.489540 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:35:03.489550 systemd[1]: Successfully loaded SELinux policy in 291.854ms. Feb 9 18:35:03.489560 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.063ms. Feb 9 18:35:03.489571 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:35:03.489582 systemd[1]: Detected virtualization microsoft. Feb 9 18:35:03.489592 systemd[1]: Detected architecture arm64. Feb 9 18:35:03.489600 systemd[1]: Detected first boot. Feb 9 18:35:03.489610 systemd[1]: Hostname set to . Feb 9 18:35:03.489618 systemd[1]: Initializing machine ID from random generator. Feb 9 18:35:03.489627 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 18:35:03.489636 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 9 18:35:03.489646 kernel: audit: type=1400 audit(1707503696.101:88): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:35:03.489658 kernel: audit: type=1300 audit(1707503696.101:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000145324 a1=40000c6618 a2=40000ccac0 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.489669 kernel: audit: type=1327 audit(1707503696.101:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:35:03.489679 kernel: audit: type=1400 audit(1707503696.111:89): avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:35:03.489689 kernel: audit: type=1300 audit(1707503696.111:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145409 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.489697 kernel: audit: type=1307 audit(1707503696.111:89): cwd="/" Feb 9 18:35:03.489708 kernel: audit: type=1302 audit(1707503696.111:89): item=0 
name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:03.489717 kernel: audit: type=1302 audit(1707503696.111:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:03.489726 kernel: audit: type=1327 audit(1707503696.111:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:35:03.489735 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:35:03.489744 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:35:03.489754 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:35:03.489764 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 18:35:03.489774 kernel: audit: type=1334 audit(1707503702.730:90): prog-id=12 op=LOAD Feb 9 18:35:03.489783 kernel: audit: type=1334 audit(1707503702.730:91): prog-id=3 op=UNLOAD Feb 9 18:35:03.489792 kernel: audit: type=1334 audit(1707503702.737:92): prog-id=13 op=LOAD Feb 9 18:35:03.489800 kernel: audit: type=1334 audit(1707503702.743:93): prog-id=14 op=LOAD Feb 9 18:35:03.489808 kernel: audit: type=1334 audit(1707503702.743:94): prog-id=4 op=UNLOAD Feb 9 18:35:03.489817 kernel: audit: type=1334 audit(1707503702.743:95): prog-id=5 op=UNLOAD Feb 9 18:35:03.489829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:35:03.489838 kernel: audit: type=1334 audit(1707503702.750:96): prog-id=15 op=LOAD Feb 9 18:35:03.489848 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:35:03.489857 kernel: audit: type=1334 audit(1707503702.750:97): prog-id=12 op=UNLOAD Feb 9 18:35:03.489867 kernel: audit: type=1334 audit(1707503702.757:98): prog-id=16 op=LOAD Feb 9 18:35:03.489876 kernel: audit: type=1334 audit(1707503702.763:99): prog-id=17 op=LOAD Feb 9 18:35:03.489885 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:35:03.489894 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:35:03.489904 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:35:03.489914 systemd[1]: Created slice system-getty.slice. Feb 9 18:35:03.489924 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:35:03.489933 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:35:03.489942 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:35:03.489952 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:35:03.489961 systemd[1]: Created slice user.slice. Feb 9 18:35:03.489970 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:35:03.489979 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:35:03.489989 systemd[1]: Set up automount boot.automount. 
Feb 9 18:35:03.489999 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:35:03.490009 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:35:03.490018 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:35:03.490027 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:35:03.490036 systemd[1]: Reached target integritysetup.target. Feb 9 18:35:03.490046 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:35:03.490055 systemd[1]: Reached target remote-fs.target. Feb 9 18:35:03.490065 systemd[1]: Reached target slices.target. Feb 9 18:35:03.490076 systemd[1]: Reached target swap.target. Feb 9 18:35:03.490085 systemd[1]: Reached target torcx.target. Feb 9 18:35:03.490094 systemd[1]: Reached target veritysetup.target. Feb 9 18:35:03.490103 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:35:03.490112 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:35:03.490122 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:35:03.490132 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:35:03.490142 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:35:03.490152 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:35:03.490161 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:35:03.490170 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:35:03.490180 systemd[1]: Mounting media.mount... Feb 9 18:35:03.490189 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:35:03.490199 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:35:03.490209 systemd[1]: Mounting tmp.mount... Feb 9 18:35:03.490218 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:35:03.490228 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:35:03.490238 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:35:03.490247 systemd[1]: Starting modprobe@configfs.service... 
Feb 9 18:35:03.490257 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:35:03.490267 systemd[1]: Starting modprobe@drm.service... Feb 9 18:35:03.490276 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:35:03.490285 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:35:03.490296 systemd[1]: Starting modprobe@loop.service... Feb 9 18:35:03.490306 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:35:03.490315 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:35:03.490325 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:35:03.490334 kernel: fuse: init (API version 7.34) Feb 9 18:35:03.490343 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:35:03.490353 kernel: loop: module loaded Feb 9 18:35:03.490362 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:35:03.490372 systemd[1]: Stopped systemd-journald.service. Feb 9 18:35:03.490382 systemd[1]: systemd-journald.service: Consumed 3.754s CPU time. Feb 9 18:35:03.490391 systemd[1]: Starting systemd-journald.service... Feb 9 18:35:03.490401 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:35:03.490410 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:35:03.490419 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:35:03.490429 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:35:03.490438 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:35:03.490451 systemd-journald[1181]: Journal started Feb 9 18:35:03.490543 systemd-journald[1181]: Runtime Journal (/run/log/journal/1379aec5e08a4a1892edcc45a8be914b) is 8.0M, max 78.6M, 70.6M free. 
Feb 9 18:34:54.150000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:34:54.807000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:34:54.807000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:34:54.807000 audit: BPF prog-id=10 op=LOAD Feb 9 18:34:54.807000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:34:54.807000 audit: BPF prog-id=11 op=LOAD Feb 9 18:34:54.807000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:34:56.101000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:34:56.101000 audit[1075]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000145324 a1=40000c6618 a2=40000ccac0 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.101000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:34:56.111000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:34:56.111000 audit[1075]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145409 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.111000 audit: CWD cwd="/" Feb 9 18:34:56.111000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:56.111000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:56.111000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:35:02.730000 audit: BPF prog-id=12 op=LOAD Feb 9 18:35:02.730000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:35:02.737000 audit: BPF prog-id=13 op=LOAD Feb 9 18:35:02.743000 audit: BPF prog-id=14 op=LOAD Feb 9 18:35:02.743000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:35:02.743000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:35:02.750000 audit: BPF prog-id=15 op=LOAD Feb 9 18:35:02.750000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:35:02.757000 audit: BPF prog-id=16 op=LOAD Feb 9 18:35:02.763000 audit: BPF prog-id=17 op=LOAD Feb 9 18:35:02.763000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:35:02.763000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:35:02.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:02.796000 audit: BPF prog-id=15 op=UNLOAD Feb 9 18:35:02.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:02.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:03.408000 audit: BPF prog-id=18 op=LOAD Feb 9 18:35:03.408000 audit: BPF prog-id=19 op=LOAD Feb 9 18:35:03.408000 audit: BPF prog-id=20 op=LOAD Feb 9 18:35:03.408000 audit: BPF prog-id=16 op=UNLOAD Feb 9 18:35:03.408000 audit: BPF prog-id=17 op=UNLOAD Feb 9 18:35:03.485000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:35:03.485000 audit[1181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff724ba70 a2=4000 a3=1 items=0 ppid=1 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.485000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:34:56.023431 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:35:02.729238 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:34:56.070224 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:35:02.764873 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:34:56.070245 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:35:02.765239 systemd[1]: systemd-journald.service: Consumed 3.754s CPU time. 
Feb 9 18:34:56.070282 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:34:56.070293 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:34:56.070331 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:34:56.070342 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:34:56.070565 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:34:56.070598 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:34:56.070610 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:34:56.086572 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:34:56.086629 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:34:56.086657 /usr/lib/systemd/system-generators/torcx-generator[1075]: 
time="2024-02-09T18:34:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:34:56.086672 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:34:56.086691 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:34:56.086705 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:34:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:35:01.583673 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:35:01.583934 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:35:01.584042 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:35:01.584195 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:35:01.584242 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:35:01.584297 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T18:35:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:35:03.498697 systemd[1]: Stopped verity-setup.service. Feb 9 18:35:03.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.517905 systemd[1]: Started systemd-journald.service. Feb 9 18:35:03.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.518738 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:35:03.524259 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:35:03.529760 systemd[1]: Mounted media.mount. Feb 9 18:35:03.534520 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:35:03.539714 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:35:03.545138 systemd[1]: Mounted tmp.mount. Feb 9 18:35:03.549555 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 18:35:03.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.555520 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:35:03.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.561594 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:35:03.561721 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:35:03.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.567704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:35:03.567817 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:35:03.573729 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:35:03.573881 systemd[1]: Finished modprobe@drm.service. 
Feb 9 18:35:03.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.580762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:35:03.580883 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:35:03.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.587446 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:35:03.587578 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:35:03.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.593421 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:35:03.593554 systemd[1]: Finished modprobe@loop.service. 
Feb 9 18:35:03.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.599316 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:35:03.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.605468 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:35:03.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.612878 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:35:03.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.618862 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:35:03.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.624855 systemd[1]: Reached target network-pre.target. Feb 9 18:35:03.631198 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:35:03.637120 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 18:35:03.641818 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:35:03.643202 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:35:03.649269 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:35:03.654560 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:35:03.655506 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:35:03.660727 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:35:03.661705 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:35:03.667318 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:35:03.673297 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:35:03.680293 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:35:03.686645 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:35:03.693399 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:35:03.699122 systemd-journald[1181]: Time spent on flushing to /var/log/journal/1379aec5e08a4a1892edcc45a8be914b is 14.051ms for 1134 entries. Feb 9 18:35:03.699122 systemd-journald[1181]: System Journal (/var/log/journal/1379aec5e08a4a1892edcc45a8be914b) is 8.0M, max 2.6G, 2.6G free. Feb 9 18:35:03.774233 systemd-journald[1181]: Received client request to flush runtime journal. Feb 9 18:35:03.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:03.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:03.719445 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:35:03.724471 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:35:03.731995 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:35:03.775182 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:35:03.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:04.236472 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:35:04.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:04.714642 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:35:04.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:04.720000 audit: BPF prog-id=21 op=LOAD Feb 9 18:35:04.720000 audit: BPF prog-id=22 op=LOAD Feb 9 18:35:04.721000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:35:04.721000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:35:04.722187 systemd[1]: Starting systemd-udevd.service... Feb 9 18:35:04.740945 systemd-udevd[1198]: Using default interface naming scheme 'v252'. Feb 9 18:35:04.914379 systemd[1]: Started systemd-udevd.service. Feb 9 18:35:04.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:04.925000 audit: BPF prog-id=23 op=LOAD Feb 9 18:35:04.927942 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:35:04.956207 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 18:35:05.016547 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 18:35:05.016000 audit[1208]: AVC avc: denied { confidentiality } for pid=1208 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:35:05.027000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:35:05.027000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:35:05.027000 audit: BPF prog-id=26 op=LOAD
Feb 9 18:35:05.028942 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:35:05.052328 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 18:35:05.054198 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 18:35:05.054236 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 18:35:05.054323 kernel: hv_vmbus: registering driver hv_utils
Feb 9 18:35:05.062032 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 18:35:05.062108 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 18:35:05.062135 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 18:35:05.083004 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 18:35:05.083116 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 18:35:05.083146 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 18:35:05.083170 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 9 18:35:05.370005 kernel: Console: switching to colour dummy device 80x25
Feb 9 18:35:05.378489 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:35:05.016000 audit[1208]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac45bf5a0 a1=aa2c a2=ffffb92524b0 a3=aaaac451d010 items=12 ppid=1198 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:35:05.016000 audit: CWD cwd="/"
Feb 9 18:35:05.016000 audit: PATH item=0 name=(null) inode=6767 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=1 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=2 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=3 name=(null) inode=10751 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=4 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=5 name=(null) inode=10752 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=6 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=7 name=(null) inode=10753 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=8 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=9 name=(null) inode=10754 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=10 name=(null) inode=10750 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PATH item=11 name=(null) inode=10755 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:35:05.016000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 18:35:05.422733 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:35:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:05.667304 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1209)
Feb 9 18:35:05.682210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:35:05.689809 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 18:35:05.694964 systemd-networkd[1219]: lo: Link UP
Feb 9 18:35:05.695234 systemd-networkd[1219]: lo: Gained carrier
Feb 9 18:35:05.695767 systemd-networkd[1219]: Enumeration completed
Feb 9 18:35:05.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:05.696085 systemd[1]: Started systemd-networkd.service.
Feb 9 18:35:05.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:05.702023 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 18:35:05.708027 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 18:35:05.727249 systemd-networkd[1219]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:35:05.777312 kernel: mlx5_core 40ab:00:02.0 enP16555s1: Link up
Feb 9 18:35:05.804312 kernel: hv_netvsc 000d3a6e-367b-000d-3a6e-367b000d3a6e eth0: Data path switched to VF: enP16555s1
Feb 9 18:35:05.805184 systemd-networkd[1219]: enP16555s1: Link UP
Feb 9 18:35:05.805536 systemd-networkd[1219]: eth0: Link UP
Feb 9 18:35:05.805545 systemd-networkd[1219]: eth0: Gained carrier
Feb 9 18:35:05.809756 systemd-networkd[1219]: enP16555s1: Gained carrier
Feb 9 18:35:05.817424 systemd-networkd[1219]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 18:35:06.016530 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:35:06.056191 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 18:35:06.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:06.062096 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:35:06.068636 systemd[1]: Starting lvm2-activation.service...
Feb 9 18:35:06.072619 lvm[1277]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:35:06.096210 systemd[1]: Finished lvm2-activation.service.
Feb 9 18:35:06.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:06.101991 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:35:06.107946 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 18:35:06.107975 systemd[1]: Reached target local-fs.target.
Feb 9 18:35:06.113878 systemd[1]: Reached target machines.target.
Feb 9 18:35:06.120804 systemd[1]: Starting ldconfig.service...
Feb 9 18:35:06.125193 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 18:35:06.125270 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:35:06.126383 systemd[1]: Starting systemd-boot-update.service...
Feb 9 18:35:06.132570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 18:35:06.140667 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 18:35:06.146736 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:35:06.146793 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:35:06.147788 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 18:35:06.192345 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1279 (bootctl)
Feb 9 18:35:06.193575 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 18:35:06.263329 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 18:35:06.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:06.765500 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 18:35:07.005997 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 18:35:07.007755 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 18:35:07.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:07.030723 systemd-fsck[1287]: fsck.fat 4.2 (2021-01-31)
Feb 9 18:35:07.030723 systemd-fsck[1287]: /dev/sda1: 236 files, 113719/258078 clusters
Feb 9 18:35:07.033431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 18:35:07.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:07.047239 systemd[1]: Mounting boot.mount...
Feb 9 18:35:07.057476 systemd[1]: Mounted boot.mount.
Feb 9 18:35:07.063101 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 18:35:07.069066 systemd[1]: Finished systemd-boot-update.service.
Feb 9 18:35:07.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:07.146411 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 18:35:07.446443 systemd-networkd[1219]: eth0: Gained IPv6LL
Feb 9 18:35:07.452150 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 18:35:07.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.280854 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 18:35:08.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.288098 systemd[1]: Starting audit-rules.service...
Feb 9 18:35:08.291372 kernel: kauditd_printk_skb: 78 callbacks suppressed
Feb 9 18:35:08.291436 kernel: audit: type=1130 audit(1707503708.285:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.316676 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 18:35:08.322895 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 18:35:08.328000 audit: BPF prog-id=27 op=LOAD
Feb 9 18:35:08.330607 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:35:08.340402 kernel: audit: type=1334 audit(1707503708.328:162): prog-id=27 op=LOAD
Feb 9 18:35:08.340000 audit: BPF prog-id=28 op=LOAD
Feb 9 18:35:08.343262 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 18:35:08.352766 kernel: audit: type=1334 audit(1707503708.340:163): prog-id=28 op=LOAD
Feb 9 18:35:08.354372 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 18:35:08.384477 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 18:35:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.410563 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 18:35:08.421309 kernel: audit: type=1130 audit(1707503708.389:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.431000 audit[1299]: SYSTEM_BOOT pid=1299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.454703 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 18:35:08.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.479801 kernel: audit: type=1127 audit(1707503708.431:165): pid=1299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.480436 kernel: audit: type=1130 audit(1707503708.459:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.524605 systemd[1]: Started systemd-timesyncd.service.
Feb 9 18:35:08.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.531442 systemd[1]: Reached target time-set.target.
Feb 9 18:35:08.555971 kernel: audit: type=1130 audit(1707503708.529:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.555908 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 18:35:08.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.584321 kernel: audit: type=1130 audit(1707503708.560:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.601526 systemd-resolved[1297]: Positive Trust Anchors:
Feb 9 18:35:08.601536 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:35:08.601562 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:35:08.637007 systemd-resolved[1297]: Using system hostname 'ci-3510.3.2-a-b879aa43fa'.
Feb 9 18:35:08.638863 systemd[1]: Started systemd-resolved.service.
Feb 9 18:35:08.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.647468 systemd[1]: Reached target network.target.
Feb 9 18:35:08.674200 kernel: audit: type=1130 audit(1707503708.643:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:08.674447 systemd[1]: Reached target network-online.target.
Feb 9 18:35:08.681181 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:35:08.773607 augenrules[1314]: No rules
Feb 9 18:35:08.772000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:35:08.774688 systemd[1]: Finished audit-rules.service.
Feb 9 18:35:08.797110 kernel: audit: type=1305 audit(1707503708.772:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:35:08.772000 audit[1314]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc3e22300 a2=420 a3=0 items=0 ppid=1293 pid=1314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:35:08.772000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 18:35:08.939086 systemd-timesyncd[1298]: Contacted time server 23.131.160.7:123 (0.flatcar.pool.ntp.org).
Feb 9 18:35:08.939319 systemd-timesyncd[1298]: Initial clock synchronization to Fri 2024-02-09 18:35:08.907858 UTC.
Feb 9 18:35:14.631961 ldconfig[1278]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 18:35:14.640948 systemd[1]: Finished ldconfig.service.
Feb 9 18:35:14.647011 systemd[1]: Starting systemd-update-done.service...
Feb 9 18:35:14.683169 systemd[1]: Finished systemd-update-done.service.
Feb 9 18:35:14.689160 systemd[1]: Reached target sysinit.target.
Feb 9 18:35:14.694578 systemd[1]: Started motdgen.path.
Feb 9 18:35:14.699097 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 18:35:14.706478 systemd[1]: Started logrotate.timer.
Feb 9 18:35:14.711166 systemd[1]: Started mdadm.timer.
Feb 9 18:35:14.715487 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 18:35:14.721315 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 18:35:14.721347 systemd[1]: Reached target paths.target.
Feb 9 18:35:14.726606 systemd[1]: Reached target timers.target.
Feb 9 18:35:14.732153 systemd[1]: Listening on dbus.socket.
Feb 9 18:35:14.737470 systemd[1]: Starting docker.socket...
Feb 9 18:35:14.758534 systemd[1]: Listening on sshd.socket.
Feb 9 18:35:14.763356 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:35:14.763827 systemd[1]: Listening on docker.socket.
Feb 9 18:35:14.768893 systemd[1]: Reached target sockets.target.
Feb 9 18:35:14.774230 systemd[1]: Reached target basic.target.
Feb 9 18:35:14.779229 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:35:14.779256 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:35:14.780392 systemd[1]: Starting containerd.service...
Feb 9 18:35:14.785524 systemd[1]: Starting dbus.service...
Feb 9 18:35:14.790828 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 18:35:14.796951 systemd[1]: Starting extend-filesystems.service...
Feb 9 18:35:14.804772 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 18:35:14.805880 systemd[1]: Starting motdgen.service...
Feb 9 18:35:14.811014 systemd[1]: Started nvidia.service.
Feb 9 18:35:14.816594 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 18:35:14.823488 systemd[1]: Starting prepare-critools.service...
Feb 9 18:35:14.829747 systemd[1]: Starting prepare-helm.service...
Feb 9 18:35:14.835527 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 18:35:14.841621 systemd[1]: Starting sshd-keygen.service...
Feb 9 18:35:14.848374 systemd[1]: Starting systemd-logind.service...
Feb 9 18:35:14.853624 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:35:14.853686 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 18:35:14.854114 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 18:35:14.854778 systemd[1]: Starting update-engine.service...
Feb 9 18:35:14.860500 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 18:35:14.872813 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 18:35:14.872986 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 18:35:14.883079 jq[1344]: true
Feb 9 18:35:14.883361 jq[1324]: false
Feb 9 18:35:14.897897 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 18:35:14.898062 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 18:35:14.917651 extend-filesystems[1325]: Found sda
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda1
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda2
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda3
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found usr
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda4
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda6
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda7
Feb 9 18:35:14.921947 extend-filesystems[1325]: Found sda9
Feb 9 18:35:14.921947 extend-filesystems[1325]: Checking size of /dev/sda9
Feb 9 18:35:14.978158 jq[1352]: true
Feb 9 18:35:14.930851 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 18:35:14.931019 systemd[1]: Finished motdgen.service.
Feb 9 18:35:14.987377 env[1356]: time="2024-02-09T18:35:14.987332917Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 18:35:15.011876 tar[1346]: ./
Feb 9 18:35:15.011876 tar[1346]: ./loopback
Feb 9 18:35:15.012513 tar[1347]: crictl
Feb 9 18:35:15.014440 tar[1348]: linux-arm64/helm
Feb 9 18:35:15.034207 extend-filesystems[1325]: Old size kept for /dev/sda9
Feb 9 18:35:15.034207 extend-filesystems[1325]: Found sr0
Feb 9 18:35:15.099104 bash[1373]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 18:35:15.040752 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.036313308Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.036459596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042656206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042690001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042937917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042957212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042971113Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.042980821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.043060397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099304 env[1356]: time="2024-02-09T18:35:15.043255621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:35:15.040919 systemd[1]: Finished extend-filesystems.service.
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.043403428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.043422363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.043474255Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.043485640Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062220957Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062261065Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062273848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062321626Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062338004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062351746Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062364969Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062691102Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.099661 env[1356]: time="2024-02-09T18:35:15.062706801Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.042837 systemd-logind[1339]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.062720583Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.062733207Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.062746230Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.065376347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067120464Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067366383Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067392748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067406530Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067482111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067501326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067513590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067524376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067535921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100138 env[1356]: time="2024-02-09T18:35:15.067550062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.042992 systemd-logind[1339]: New seat seat0.
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.067560848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.067572313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.067585296Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.074667107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075252860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075298041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075315818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075666878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075681259Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.075965248Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 18:35:15.100578 env[1356]: time="2024-02-09T18:35:15.076028924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 18:35:15.081861 systemd[1]: Started containerd.service.
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.076258983Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.076324697Z" level=info msg="Connect containerd service"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.076358573Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.081462053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.081692311Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.081727306Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.081769810Z" level=info msg="containerd successfully booted in 0.095040s"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090268806Z" level=info msg="Start subscribing containerd event"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090330485Z" level=info msg="Start recovering state"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090397118Z" level=info msg="Start event monitor"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090419329Z" level=info msg="Start snapshots syncer"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090429435Z" level=info msg="Start cni network conf syncer for default"
Feb 9 18:35:15.100919 env[1356]: time="2024-02-09T18:35:15.090437225Z" level=info msg="Start streaming server"
Feb 9 18:35:15.102936 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 18:35:15.130302 systemd[1]: nvidia.service: Deactivated successfully.
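[Editor's note: the "no network config found in /etc/cni/net.d" error above is containerd's CRI plugin reporting that no CNI network configuration exists yet; later log lines show a prepare-cni-plugins.service installing the plugin binaries. A minimal sketch of the kind of bridge conf file that satisfies this check is shown below — the network name, bridge name, and subnet are illustrative assumptions, not values taken from this host.]

```python
import json

# Hypothetical minimal CNI bridge configuration. "demo-net", "cni0",
# and 10.244.0.0/16 are illustrative assumptions; only the directory
# paths (/etc/cni/net.d, /opt/cni/bin) come from the log above.
conf = {
    "cniVersion": "0.3.1",
    "name": "demo-net",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

# Written to e.g. /etc/cni/net.d/10-demo.conf (with the CNI binaries
# present in /opt/cni/bin), this would clear the warning on restart.
print(json.dumps(conf, indent=2))
```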
Feb 9 18:35:15.145125 tar[1346]: ./bandwidth
Feb 9 18:35:15.164184 dbus-daemon[1323]: [system] SELinux support is enabled
Feb 9 18:35:15.164380 systemd[1]: Started dbus.service.
Feb 9 18:35:15.170488 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 18:35:15.171006 dbus-daemon[1323]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 18:35:15.170516 systemd[1]: Reached target system-config.target.
Feb 9 18:35:15.178639 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 18:35:15.178663 systemd[1]: Reached target user-config.target.
Feb 9 18:35:15.186562 systemd[1]: Started systemd-logind.service.
Feb 9 18:35:15.270758 tar[1346]: ./ptp
Feb 9 18:35:15.393441 tar[1346]: ./vlan
Feb 9 18:35:15.486808 tar[1346]: ./host-device
Feb 9 18:35:15.570328 tar[1346]: ./tuning
Feb 9 18:35:15.583985 tar[1348]: linux-arm64/LICENSE
Feb 9 18:35:15.584085 tar[1348]: linux-arm64/README.md
Feb 9 18:35:15.590006 systemd[1]: Finished prepare-helm.service.
Feb 9 18:35:15.624844 tar[1346]: ./vrf
Feb 9 18:35:15.645752 update_engine[1342]: I0209 18:35:15.619513 1342 main.cc:92] Flatcar Update Engine starting
Feb 9 18:35:15.657358 tar[1346]: ./sbr
Feb 9 18:35:15.690466 tar[1346]: ./tap
Feb 9 18:35:15.729374 tar[1346]: ./dhcp
Feb 9 18:35:15.736653 systemd[1]: Started update-engine.service.
Feb 9 18:35:15.742990 update_engine[1342]: I0209 18:35:15.742918 1342 update_check_scheduler.cc:74] Next update check in 7m1s
Feb 9 18:35:15.756627 systemd[1]: Started locksmithd.service.
Feb 9 18:35:15.838591 tar[1346]: ./static
Feb 9 18:35:15.866006 tar[1346]: ./firewall
Feb 9 18:35:15.907371 tar[1346]: ./macvlan
Feb 9 18:35:15.945987 tar[1346]: ./dummy
Feb 9 18:35:15.977394 systemd[1]: Finished prepare-critools.service.
Feb 9 18:35:15.992954 tar[1346]: ./bridge
Feb 9 18:35:16.029202 tar[1346]: ./ipvlan
Feb 9 18:35:16.062053 tar[1346]: ./portmap
Feb 9 18:35:16.093575 tar[1346]: ./host-local
Feb 9 18:35:16.199150 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 18:35:17.562615 locksmithd[1430]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 18:35:19.254766 sshd_keygen[1343]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 18:35:19.270944 systemd[1]: Finished sshd-keygen.service.
Feb 9 18:35:19.279176 systemd[1]: Starting issuegen.service...
Feb 9 18:35:19.285291 systemd[1]: Started waagent.service.
Feb 9 18:35:19.291583 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 18:35:19.291752 systemd[1]: Finished issuegen.service.
Feb 9 18:35:19.298900 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 18:35:19.334707 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 18:35:19.341979 systemd[1]: Started getty@tty1.service.
Feb 9 18:35:19.348478 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 9 18:35:19.354633 systemd[1]: Reached target getty.target.
Feb 9 18:35:19.360970 systemd[1]: Reached target multi-user.target.
Feb 9 18:35:19.369581 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 18:35:19.384027 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 18:35:19.384185 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 18:35:19.390496 systemd[1]: Startup finished in 731ms (kernel) + 17.050s (initrd) + 25.425s (userspace) = 43.207s.
Feb 9 18:35:20.069746 login[1452]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 9 18:35:20.070097 login[1451]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 18:35:20.139464 systemd[1]: Created slice user-500.slice.
Feb 9 18:35:20.140666 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 18:35:20.143150 systemd-logind[1339]: New session 1 of user core.
Feb 9 18:35:20.178787 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 18:35:20.180228 systemd[1]: Starting user@500.service...
Feb 9 18:35:20.592527 (systemd)[1455]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:35:21.071128 login[1452]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 18:35:21.074621 systemd-logind[1339]: New session 2 of user core.
Feb 9 18:35:21.247396 systemd[1455]: Queued start job for default target default.target.
Feb 9 18:35:21.248617 systemd[1455]: Reached target paths.target.
Feb 9 18:35:21.248747 systemd[1455]: Reached target sockets.target.
Feb 9 18:35:21.248824 systemd[1455]: Reached target timers.target.
Feb 9 18:35:21.248893 systemd[1455]: Reached target basic.target.
Feb 9 18:35:21.249012 systemd[1455]: Reached target default.target.
Feb 9 18:35:21.249080 systemd[1]: Started user@500.service.
Feb 9 18:35:21.249607 systemd[1455]: Startup finished in 651ms.
Feb 9 18:35:21.249947 systemd[1]: Started session-1.scope.
Feb 9 18:35:21.250540 systemd[1]: Started session-2.scope.
Feb 9 18:35:26.144464 waagent[1449]: 2024-02-09T18:35:26.144359Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 18:35:26.151681 waagent[1449]: 2024-02-09T18:35:26.151602Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 18:35:26.156574 waagent[1449]: 2024-02-09T18:35:26.156515Z INFO Daemon Daemon Python: 3.9.16
Feb 9 18:35:26.161754 waagent[1449]: 2024-02-09T18:35:26.161674Z INFO Daemon Daemon Run daemon
Feb 9 18:35:26.166101 waagent[1449]: 2024-02-09T18:35:26.166028Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 18:35:26.183460 waagent[1449]: 2024-02-09T18:35:26.183326Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 18:35:26.199335 waagent[1449]: 2024-02-09T18:35:26.199181Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 18:35:26.209329 waagent[1449]: 2024-02-09T18:35:26.209212Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 18:35:26.214449 waagent[1449]: 2024-02-09T18:35:26.214383Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 18:35:26.220539 waagent[1449]: 2024-02-09T18:35:26.220478Z INFO Daemon Daemon Activate resource disk
Feb 9 18:35:26.225495 waagent[1449]: 2024-02-09T18:35:26.225437Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 18:35:26.240051 waagent[1449]: 2024-02-09T18:35:26.239986Z INFO Daemon Daemon Found device: None
Feb 9 18:35:26.244977 waagent[1449]: 2024-02-09T18:35:26.244918Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 18:35:26.253911 waagent[1449]: 2024-02-09T18:35:26.253847Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 18:35:26.266467 waagent[1449]: 2024-02-09T18:35:26.266406Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 18:35:26.272878 waagent[1449]: 2024-02-09T18:35:26.272820Z INFO Daemon Daemon Running default provisioning handler
Feb 9 18:35:26.286294 waagent[1449]: 2024-02-09T18:35:26.286163Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 18:35:26.301580 waagent[1449]: 2024-02-09T18:35:26.301454Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 18:35:26.312019 waagent[1449]: 2024-02-09T18:35:26.311946Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 18:35:26.317458 waagent[1449]: 2024-02-09T18:35:26.317388Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 18:35:26.549814 waagent[1449]: 2024-02-09T18:35:26.547997Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 18:35:26.648247 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 18:35:26.684737 waagent[1449]: 2024-02-09T18:35:26.684563Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 18:35:26.692498 waagent[1449]: 2024-02-09T18:35:26.692410Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 18:35:26.700758 waagent[1449]: 2024-02-09T18:35:26.700676Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 18:35:26.707982 waagent[1449]: 2024-02-09T18:35:26.707913Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 18:35:26.714555 waagent[1449]: 2024-02-09T18:35:26.714488Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 18:35:26.720641 waagent[1449]: 2024-02-09T18:35:26.720576Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 18:35:26.813917 waagent[1449]: 2024-02-09T18:35:26.813796Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 18:35:26.821056 waagent[1449]: 2024-02-09T18:35:26.821010Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 18:35:26.827139 waagent[1449]: 2024-02-09T18:35:26.827078Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 18:35:27.603468 waagent[1449]: 2024-02-09T18:35:27.603315Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 18:35:27.618783 waagent[1449]: 2024-02-09T18:35:27.618711Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 18:35:27.624647 waagent[1449]: 2024-02-09T18:35:27.624586Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 18:35:27.695446 waagent[1449]: 2024-02-09T18:35:27.695264Z INFO Daemon Daemon Found private key matching thumbprint FE6342A4DFE075C2E9A4547D008288D1B1F455F3
Feb 9 18:35:27.704503 waagent[1449]: 2024-02-09T18:35:27.704428Z INFO Daemon Daemon Certificate with thumbprint F9B3186E14983DD9E5634A68ABFB56748F3C8D0B has no matching private key.
Feb 9 18:35:27.715008 waagent[1449]: 2024-02-09T18:35:27.714942Z INFO Daemon Daemon Fetch goal state completed
Feb 9 18:35:27.763517 waagent[1449]: 2024-02-09T18:35:27.763463Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 0229ec5c-83a1-43d9-9ec3-915f42e046a3 New eTag: 16479120492666352403]
Feb 9 18:35:27.774693 waagent[1449]: 2024-02-09T18:35:27.774625Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 18:35:27.790207 waagent[1449]: 2024-02-09T18:35:27.790130Z INFO Daemon Daemon Starting provisioning
Feb 9 18:35:27.795263 waagent[1449]: 2024-02-09T18:35:27.795202Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 18:35:27.799922 waagent[1449]: 2024-02-09T18:35:27.799866Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-b879aa43fa]
Feb 9 18:35:27.837526 waagent[1449]: 2024-02-09T18:35:27.837395Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-b879aa43fa]
Feb 9 18:35:27.843894 waagent[1449]: 2024-02-09T18:35:27.843820Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 18:35:27.850584 waagent[1449]: 2024-02-09T18:35:27.850523Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 18:35:27.866536 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 18:35:27.866697 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 18:35:27.866756 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 18:35:27.866989 systemd[1]: Stopping systemd-networkd.service...
Feb 9 18:35:27.873335 systemd-networkd[1219]: eth0: DHCPv6 lease lost
Feb 9 18:35:27.874825 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 18:35:27.875000 systemd[1]: Stopped systemd-networkd.service.
Feb 9 18:35:27.876991 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:35:27.903140 systemd-networkd[1501]: enP16555s1: Link UP
Feb 9 18:35:27.903415 systemd-networkd[1501]: enP16555s1: Gained carrier
Feb 9 18:35:27.904556 systemd-networkd[1501]: eth0: Link UP
Feb 9 18:35:27.904643 systemd-networkd[1501]: eth0: Gained carrier
Feb 9 18:35:27.905013 systemd-networkd[1501]: lo: Link UP
Feb 9 18:35:27.905076 systemd-networkd[1501]: lo: Gained carrier
Feb 9 18:35:27.905424 systemd-networkd[1501]: eth0: Gained IPv6LL
Feb 9 18:35:27.906356 systemd-networkd[1501]: Enumeration completed
Feb 9 18:35:27.906529 systemd[1]: Started systemd-networkd.service.
Feb 9 18:35:27.907932 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:35:27.908110 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 18:35:27.911388 waagent[1449]: 2024-02-09T18:35:27.911199Z INFO Daemon Daemon Create user account if not exists
Feb 9 18:35:27.917722 waagent[1449]: 2024-02-09T18:35:27.917627Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 18:35:27.923489 waagent[1449]: 2024-02-09T18:35:27.923423Z INFO Daemon Daemon Configure sudoer
Feb 9 18:35:27.928844 waagent[1449]: 2024-02-09T18:35:27.928780Z INFO Daemon Daemon Configure sshd
Feb 9 18:35:27.933180 waagent[1449]: 2024-02-09T18:35:27.933121Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 18:35:27.933347 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 18:35:27.944117 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 18:35:29.169531 waagent[1449]: 2024-02-09T18:35:29.169460Z INFO Daemon Daemon Provisioning complete
Feb 9 18:35:29.193088 waagent[1449]: 2024-02-09T18:35:29.193019Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 18:35:29.199855 waagent[1449]: 2024-02-09T18:35:29.199792Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 18:35:29.212750 waagent[1449]: 2024-02-09T18:35:29.212675Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 18:35:29.507396 waagent[1510]: 2024-02-09T18:35:29.507233Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 18:35:29.508473 waagent[1510]: 2024-02-09T18:35:29.508416Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:29.508711 waagent[1510]: 2024-02-09T18:35:29.508664Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:29.520942 waagent[1510]: 2024-02-09T18:35:29.520873Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 18:35:29.521211 waagent[1510]: 2024-02-09T18:35:29.521164Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 18:35:29.592641 waagent[1510]: 2024-02-09T18:35:29.592508Z INFO ExtHandler ExtHandler Found private key matching thumbprint FE6342A4DFE075C2E9A4547D008288D1B1F455F3
Feb 9 18:35:29.593015 waagent[1510]: 2024-02-09T18:35:29.592962Z INFO ExtHandler ExtHandler Certificate with thumbprint F9B3186E14983DD9E5634A68ABFB56748F3C8D0B has no matching private key.
Feb 9 18:35:29.593367 waagent[1510]: 2024-02-09T18:35:29.593313Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 18:35:29.606982 waagent[1510]: 2024-02-09T18:35:29.606931Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: d32dcff9-2182-4729-8cef-31f5f689dcc3 New eTag: 16479120492666352403]
Feb 9 18:35:29.607743 waagent[1510]: 2024-02-09T18:35:29.607685Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 18:35:29.691053 waagent[1510]: 2024-02-09T18:35:29.690912Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 18:35:29.715439 waagent[1510]: 2024-02-09T18:35:29.715350Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1510
Feb 9 18:35:29.719420 waagent[1510]: 2024-02-09T18:35:29.719356Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 18:35:29.720868 waagent[1510]: 2024-02-09T18:35:29.720812Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 18:35:29.826523 waagent[1510]: 2024-02-09T18:35:29.826413Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 18:35:29.827054 waagent[1510]: 2024-02-09T18:35:29.827001Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 18:35:29.834847 waagent[1510]: 2024-02-09T18:35:29.834798Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 18:35:29.835479 waagent[1510]: 2024-02-09T18:35:29.835424Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 18:35:29.836744 waagent[1510]: 2024-02-09T18:35:29.836680Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 18:35:29.838171 waagent[1510]: 2024-02-09T18:35:29.838103Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 18:35:29.838492 waagent[1510]: 2024-02-09T18:35:29.838422Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:29.839041 waagent[1510]: 2024-02-09T18:35:29.838969Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:29.839634 waagent[1510]: 2024-02-09T18:35:29.839568Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 18:35:29.839949 waagent[1510]: 2024-02-09T18:35:29.839890Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 18:35:29.839949 waagent[1510]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 18:35:29.839949 waagent[1510]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 18:35:29.839949 waagent[1510]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 18:35:29.839949 waagent[1510]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:29.839949 waagent[1510]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:29.839949 waagent[1510]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:29.842051 waagent[1510]: 2024-02-09T18:35:29.841890Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
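[Editor's note: the /proc/net/route dump above stores addresses as little-endian hex, which is why the gateway reads 0114C80A rather than a dotted quad. A minimal decoding sketch (the helper name `decode` is ours, not the agent's) confirms that 0114C80A is the DHCP gateway 10.200.20.1 and 10813FA8 is the Azure wireserver 168.63.129.16 logged elsewhere in this boot.]

```python
import socket
import struct

def decode(hex_addr: str) -> str:
    """Decode one address field of /proc/net/route (little-endian
    32-bit hex) into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

print(decode("0114C80A"))  # -> 10.200.20.1 (default gateway)
print(decode("0014C80A"))  # -> 10.200.20.0 (eth0 subnet)
print(decode("10813FA8"))  # -> 168.63.129.16 (Azure wireserver host route)
```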
Feb 9 18:35:29.842638 waagent[1510]: 2024-02-09T18:35:29.842557Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:29.843395 waagent[1510]: 2024-02-09T18:35:29.843328Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:29.843978 waagent[1510]: 2024-02-09T18:35:29.843904Z INFO EnvHandler ExtHandler Configure routes
Feb 9 18:35:29.844131 waagent[1510]: 2024-02-09T18:35:29.844083Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 18:35:29.844246 waagent[1510]: 2024-02-09T18:35:29.844204Z INFO EnvHandler ExtHandler Routes:None
Feb 9 18:35:29.845144 waagent[1510]: 2024-02-09T18:35:29.845086Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 18:35:29.845324 waagent[1510]: 2024-02-09T18:35:29.845228Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 18:35:29.846049 waagent[1510]: 2024-02-09T18:35:29.845958Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 18:35:29.846234 waagent[1510]: 2024-02-09T18:35:29.846165Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 18:35:29.846654 waagent[1510]: 2024-02-09T18:35:29.846582Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 18:35:29.856716 waagent[1510]: 2024-02-09T18:35:29.856647Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 18:35:29.858758 waagent[1510]: 2024-02-09T18:35:29.858695Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 18:35:29.860067 waagent[1510]: 2024-02-09T18:35:29.860003Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 18:35:29.889418 waagent[1510]: 2024-02-09T18:35:29.889269Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1501'
Feb 9 18:35:29.899523 waagent[1510]: 2024-02-09T18:35:29.899457Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 18:35:30.117741 waagent[1510]: 2024-02-09T18:35:30.117681Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 18:35:30.216474 waagent[1449]: 2024-02-09T18:35:30.216328Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 18:35:30.220373 waagent[1449]: 2024-02-09T18:35:30.220313Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 18:35:31.343371 waagent[1536]: 2024-02-09T18:35:31.343258Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 18:35:31.344375 waagent[1536]: 2024-02-09T18:35:31.344319Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 18:35:31.344607 waagent[1536]: 2024-02-09T18:35:31.344560Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 18:35:31.352314 waagent[1536]: 2024-02-09T18:35:31.352194Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 18:35:31.352818 waagent[1536]: 2024-02-09T18:35:31.352766Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:31.353063 waagent[1536]: 2024-02-09T18:35:31.353014Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:31.366561 waagent[1536]: 2024-02-09T18:35:31.366496Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 18:35:31.375145 waagent[1536]: 2024-02-09T18:35:31.375095Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 18:35:31.376248 waagent[1536]: 2024-02-09T18:35:31.376194Z INFO ExtHandler
Feb 9 18:35:31.376514 waagent[1536]: 2024-02-09T18:35:31.376463Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f40199bf-9d2d-491b-8bd8-6144c87a5cc0 eTag: 16479120492666352403 source: Fabric]
Feb 9 18:35:31.377342 waagent[1536]: 2024-02-09T18:35:31.377269Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 18:35:31.378652 waagent[1536]: 2024-02-09T18:35:31.378593Z INFO ExtHandler
Feb 9 18:35:31.378879 waagent[1536]: 2024-02-09T18:35:31.378832Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 9 18:35:31.385104 waagent[1536]: 2024-02-09T18:35:31.385057Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 9 18:35:31.385654 waagent[1536]: 2024-02-09T18:35:31.385609Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 18:35:31.405860 waagent[1536]: 2024-02-09T18:35:31.405806Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
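[Editor's note: the earlier ERROR "invalid literal for int() with base 10: 'MainPID=1501'" shows the agent passing raw `systemctl show -p MainPID` output (which prints `MainPID=1501`) straight to int(). A hedged sketch of the parsing that avoids this — the function name `parse_main_pid` is ours, not the agent's code:]

```python
def parse_main_pid(systemctl_output: str) -> int:
    """Parse `systemctl show -p MainPID <unit>` output.

    systemctl prints 'MainPID=1501' (or just '1501' with --value);
    feeding the key=value form to int() raises exactly the
    ValueError seen in the log, so split on '=' first.
    """
    _, _, value = systemctl_output.strip().partition("=")
    return int(value or systemctl_output)

print(parse_main_pid("MainPID=1501"))  # -> 1501
```

Using `systemctl show -p MainPID --value <unit>` would sidestep the parsing entirely by printing only the bare number.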
Feb 9 18:35:31.473274 waagent[1536]: 2024-02-09T18:35:31.473145Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FE6342A4DFE075C2E9A4547D008288D1B1F455F3', 'hasPrivateKey': True}
Feb 9 18:35:31.474560 waagent[1536]: 2024-02-09T18:35:31.474502Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F9B3186E14983DD9E5634A68ABFB56748F3C8D0B', 'hasPrivateKey': False}
Feb 9 18:35:31.475720 waagent[1536]: 2024-02-09T18:35:31.475663Z INFO ExtHandler Fetch goal state completed
Feb 9 18:35:31.504307 waagent[1536]: 2024-02-09T18:35:31.504216Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1536
Feb 9 18:35:31.508042 waagent[1536]: 2024-02-09T18:35:31.507984Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 18:35:31.509625 waagent[1536]: 2024-02-09T18:35:31.509568Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 18:35:31.514332 waagent[1536]: 2024-02-09T18:35:31.514261Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 18:35:31.514835 waagent[1536]: 2024-02-09T18:35:31.514781Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 18:35:31.522385 waagent[1536]: 2024-02-09T18:35:31.522326Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 18:35:31.522967 waagent[1536]: 2024-02-09T18:35:31.522915Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 18:35:31.528730 waagent[1536]: 2024-02-09T18:35:31.528637Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 9 18:35:31.532424 waagent[1536]: 2024-02-09T18:35:31.532368Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 9 18:35:31.533981 waagent[1536]: 2024-02-09T18:35:31.533913Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 18:35:31.534261 waagent[1536]: 2024-02-09T18:35:31.534189Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:31.534842 waagent[1536]: 2024-02-09T18:35:31.534772Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:31.535472 waagent[1536]: 2024-02-09T18:35:31.535394Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 18:35:31.536215 waagent[1536]: 2024-02-09T18:35:31.536148Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 18:35:31.536659 waagent[1536]: 2024-02-09T18:35:31.536590Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 18:35:31.536722 waagent[1536]: 2024-02-09T18:35:31.536669Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 18:35:31.536722 waagent[1536]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 18:35:31.536722 waagent[1536]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 18:35:31.536722 waagent[1536]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 18:35:31.536722 waagent[1536]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:31.536722 waagent[1536]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:31.536722 waagent[1536]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:35:31.538660 waagent[1536]: 2024-02-09T18:35:31.538486Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 18:35:31.539493 waagent[1536]: 2024-02-09T18:35:31.539412Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 18:35:31.539884 waagent[1536]: 2024-02-09T18:35:31.539813Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 18:35:31.540710 waagent[1536]: 2024-02-09T18:35:31.540614Z INFO EnvHandler ExtHandler Configure routes
Feb 9 18:35:31.541236 waagent[1536]: 2024-02-09T18:35:31.541178Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 18:35:31.542424 waagent[1536]: 2024-02-09T18:35:31.542127Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 18:35:31.542614 waagent[1536]: 2024-02-09T18:35:31.542548Z INFO EnvHandler ExtHandler Routes:None
Feb 9 18:35:31.542807 waagent[1536]: 2024-02-09T18:35:31.542747Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 18:35:31.544886 waagent[1536]: 2024-02-09T18:35:31.544731Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 18:35:31.569248 waagent[1536]: 2024-02-09T18:35:31.569167Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 18:35:31.570774 waagent[1536]: 2024-02-09T18:35:31.570702Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 18:35:31.570774 waagent[1536]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 18:35:31.570774 waagent[1536]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 18:35:31.570774 waagent[1536]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:36:7b brd ff:ff:ff:ff:ff:ff
Feb 9 18:35:31.570774 waagent[1536]: 3: enP16555s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:36:7b brd ff:ff:ff:ff:ff:ff\ altname enP16555p0s2
Feb 9 18:35:31.570774 waagent[1536]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 18:35:31.570774 waagent[1536]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 18:35:31.570774 waagent[1536]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 18:35:31.570774 waagent[1536]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 18:35:31.570774 waagent[1536]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 18:35:31.570774 waagent[1536]: 2: eth0 inet6 fe80::20d:3aff:fe6e:367b/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 18:35:31.571217 waagent[1536]: 2024-02-09T18:35:31.571144Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 18:35:31.594108 waagent[1536]: 2024-02-09T18:35:31.594011Z INFO ExtHandler ExtHandler
Feb 9 18:35:31.594426 waagent[1536]: 2024-02-09T18:35:31.594366Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dcd82278-4209-47f6-b1bc-aae6897f1082 correlation 79052633-4368-4171-970c-e64e525a28e8 created: 2024-02-09T18:33:52.770851Z]
Feb 9 18:35:31.595478 waagent[1536]: 2024-02-09T18:35:31.595412Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 9 18:35:31.597381 waagent[1536]: 2024-02-09T18:35:31.597321Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Feb 9 18:35:31.617415 waagent[1536]: 2024-02-09T18:35:31.617340Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 9 18:35:31.649868 waagent[1536]: 2024-02-09T18:35:31.649781Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9B182E93-3E8D-4BF1-B453-024AF992F8F0;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 9 18:35:31.785460 waagent[1536]: 2024-02-09T18:35:31.785334Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules: Feb 9 18:35:31.785460 waagent[1536]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.785460 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.785460 waagent[1536]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.785460 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.785460 waagent[1536]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.785460 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.785460 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:35:31.785460 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:35:31.785460 waagent[1536]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:35:31.793636 waagent[1536]: 2024-02-09T18:35:31.793527Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 18:35:31.793636 waagent[1536]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.793636 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.793636 waagent[1536]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.793636 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.793636 waagent[1536]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:35:31.793636 waagent[1536]: pkts bytes target prot opt in out source destination Feb 9 18:35:31.793636 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:35:31.793636 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:35:31.793636 waagent[1536]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:35:31.794473 waagent[1536]: 2024-02-09T18:35:31.794425Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 18:35:53.501222 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 9 18:35:59.641708 systemd[1]: Created slice system-sshd.slice. Feb 9 18:35:59.642759 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.12.6:55612.service. Feb 9 18:36:00.255920 sshd[1587]: Accepted publickey for core from 10.200.12.6 port 55612 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:00.272690 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:00.277188 systemd[1]: Started session-3.scope. Feb 9 18:36:00.277496 systemd-logind[1339]: New session 3 of user core. Feb 9 18:36:00.643736 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.12.6:55628.service. Feb 9 18:36:01.070841 sshd[1592]: Accepted publickey for core from 10.200.12.6 port 55628 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:01.072421 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:01.075966 systemd-logind[1339]: New session 4 of user core. Feb 9 18:36:01.076418 systemd[1]: Started session-4.scope. Feb 9 18:36:01.183879 update_engine[1342]: I0209 18:36:01.183826 1342 update_attempter.cc:509] Updating boot flags... Feb 9 18:36:01.375935 sshd[1592]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:01.378090 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:36:01.378650 systemd-logind[1339]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:36:01.378751 systemd[1]: sshd@1-10.200.20.17:22-10.200.12.6:55628.service: Deactivated successfully. Feb 9 18:36:01.379896 systemd-logind[1339]: Removed session 4. Feb 9 18:36:01.450555 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.12.6:55642.service. 
Feb 9 18:36:01.898048 sshd[1637]: Accepted publickey for core from 10.200.12.6 port 55642 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:01.899575 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:01.903619 systemd[1]: Started session-5.scope. Feb 9 18:36:01.904227 systemd-logind[1339]: New session 5 of user core. Feb 9 18:36:02.230676 sshd[1637]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:02.233333 systemd[1]: sshd@2-10.200.20.17:22-10.200.12.6:55642.service: Deactivated successfully. Feb 9 18:36:02.233960 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:36:02.234522 systemd-logind[1339]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:36:02.235139 systemd-logind[1339]: Removed session 5. Feb 9 18:36:02.299956 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.12.6:55644.service. Feb 9 18:36:02.715166 sshd[1643]: Accepted publickey for core from 10.200.12.6 port 55644 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:02.716413 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:02.720080 systemd-logind[1339]: New session 6 of user core. Feb 9 18:36:02.720523 systemd[1]: Started session-6.scope. Feb 9 18:36:03.018010 sshd[1643]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:03.020448 systemd[1]: sshd@3-10.200.20.17:22-10.200.12.6:55644.service: Deactivated successfully. Feb 9 18:36:03.021075 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:36:03.021619 systemd-logind[1339]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:36:03.022458 systemd-logind[1339]: Removed session 6. Feb 9 18:36:03.088023 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.12.6:55656.service. 
Feb 9 18:36:03.509349 sshd[1649]: Accepted publickey for core from 10.200.12.6 port 55656 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:03.510571 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:03.514255 systemd-logind[1339]: New session 7 of user core. Feb 9 18:36:03.514709 systemd[1]: Started session-7.scope. Feb 9 18:36:04.025551 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:36:04.025754 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:36:04.704171 systemd[1]: Starting docker.service... Feb 9 18:36:04.754832 env[1667]: time="2024-02-09T18:36:04.754776617Z" level=info msg="Starting up" Feb 9 18:36:04.756026 env[1667]: time="2024-02-09T18:36:04.756003029Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:36:04.756120 env[1667]: time="2024-02-09T18:36:04.756106183Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:36:04.756184 env[1667]: time="2024-02-09T18:36:04.756168540Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:36:04.756236 env[1667]: time="2024-02-09T18:36:04.756224817Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:36:04.757853 env[1667]: time="2024-02-09T18:36:04.757834248Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:36:04.757949 env[1667]: time="2024-02-09T18:36:04.757935722Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:36:04.758009 env[1667]: time="2024-02-09T18:36:04.757995319Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:36:04.758062 env[1667]: time="2024-02-09T18:36:04.758050756Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Feb 9 18:36:04.762329 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1960825773-merged.mount: Deactivated successfully. Feb 9 18:36:04.853001 env[1667]: time="2024-02-09T18:36:04.852960024Z" level=info msg="Loading containers: start." Feb 9 18:36:04.999302 kernel: Initializing XFRM netlink socket Feb 9 18:36:05.020286 env[1667]: time="2024-02-09T18:36:05.020239994Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:36:05.161016 systemd-networkd[1501]: docker0: Link UP Feb 9 18:36:05.177312 env[1667]: time="2024-02-09T18:36:05.177260610Z" level=info msg="Loading containers: done." Feb 9 18:36:05.186323 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1312551314-merged.mount: Deactivated successfully. Feb 9 18:36:05.197438 env[1667]: time="2024-02-09T18:36:05.197405005Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:36:05.197734 env[1667]: time="2024-02-09T18:36:05.197717069Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:36:05.197889 env[1667]: time="2024-02-09T18:36:05.197874260Z" level=info msg="Daemon has completed initialization" Feb 9 18:36:05.225631 systemd[1]: Started docker.service. Feb 9 18:36:05.233842 env[1667]: time="2024-02-09T18:36:05.233796517Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:36:05.249235 systemd[1]: Reloading. 
Feb 9 18:36:05.312272 /usr/lib/systemd/system-generators/torcx-generator[1797]: time="2024-02-09T18:36:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:36:05.312324 /usr/lib/systemd/system-generators/torcx-generator[1797]: time="2024-02-09T18:36:05Z" level=info msg="torcx already run" Feb 9 18:36:05.382959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:36:05.382978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:36:05.398123 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:36:05.477936 systemd[1]: Started kubelet.service. Feb 9 18:36:05.551095 kubelet[1857]: E0209 18:36:05.551049 1857 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:36:05.553245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:36:05.553385 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 18:36:09.948181 env[1356]: time="2024-02-09T18:36:09.946893301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 18:36:10.890352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299915361.mount: Deactivated successfully. Feb 9 18:36:13.381316 env[1356]: time="2024-02-09T18:36:13.381249528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.387002 env[1356]: time="2024-02-09T18:36:13.386968291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.390200 env[1356]: time="2024-02-09T18:36:13.390173072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.394079 env[1356]: time="2024-02-09T18:36:13.394055013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.394750 env[1356]: time="2024-02-09T18:36:13.394723934Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\"" Feb 9 18:36:13.403253 env[1356]: time="2024-02-09T18:36:13.403224608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 18:36:15.601272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:36:15.601472 systemd[1]: Stopped kubelet.service. Feb 9 18:36:15.602903 systemd[1]: Started kubelet.service. 
Feb 9 18:36:15.651333 kubelet[1881]: E0209 18:36:15.651295 1881 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:36:15.654060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:36:15.654180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:36:16.026703 env[1356]: time="2024-02-09T18:36:16.026297248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:16.036614 env[1356]: time="2024-02-09T18:36:16.036576205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:16.041470 env[1356]: time="2024-02-09T18:36:16.041437311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:16.046431 env[1356]: time="2024-02-09T18:36:16.046391587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:16.047256 env[1356]: time="2024-02-09T18:36:16.047229644Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\"" Feb 9 18:36:16.056402 env[1356]: time="2024-02-09T18:36:16.056373406Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 18:36:17.778761 env[1356]: time="2024-02-09T18:36:17.778707897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:17.788728 env[1356]: time="2024-02-09T18:36:17.788678009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:17.794351 env[1356]: time="2024-02-09T18:36:17.794321608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:17.800665 env[1356]: time="2024-02-09T18:36:17.800627569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:17.801400 env[1356]: time="2024-02-09T18:36:17.801374909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\"" Feb 9 18:36:17.810911 env[1356]: time="2024-02-09T18:36:17.810868085Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 18:36:18.872638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249225916.mount: Deactivated successfully. 
Feb 9 18:36:19.933369 env[1356]: time="2024-02-09T18:36:19.933326132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:19.944525 env[1356]: time="2024-02-09T18:36:19.944486420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:19.949130 env[1356]: time="2024-02-09T18:36:19.949104342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:19.953697 env[1356]: time="2024-02-09T18:36:19.953671458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:19.954244 env[1356]: time="2024-02-09T18:36:19.954219723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 18:36:19.963471 env[1356]: time="2024-02-09T18:36:19.963447932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:36:20.564009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054561877.mount: Deactivated successfully. 
Feb 9 18:36:20.588379 env[1356]: time="2024-02-09T18:36:20.588342379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.599012 env[1356]: time="2024-02-09T18:36:20.598978384Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.620798 env[1356]: time="2024-02-09T18:36:20.620763932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.626415 env[1356]: time="2024-02-09T18:36:20.626390349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.626839 env[1356]: time="2024-02-09T18:36:20.626806232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:36:20.636080 env[1356]: time="2024-02-09T18:36:20.636048764Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 18:36:21.743913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885434214.mount: Deactivated successfully. 
Feb 9 18:36:25.001668 env[1356]: time="2024-02-09T18:36:25.001622600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:25.009132 env[1356]: time="2024-02-09T18:36:25.009087827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:25.013504 env[1356]: time="2024-02-09T18:36:25.013478079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:25.018032 env[1356]: time="2024-02-09T18:36:25.017997097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:25.018779 env[1356]: time="2024-02-09T18:36:25.018749380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Feb 9 18:36:25.028133 env[1356]: time="2024-02-09T18:36:25.028098674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 18:36:25.744477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 18:36:25.744607 systemd[1]: Stopped kubelet.service. Feb 9 18:36:25.745995 systemd[1]: Started kubelet.service. Feb 9 18:36:25.750449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234114516.mount: Deactivated successfully. 
Feb 9 18:36:25.788126 kubelet[1912]: E0209 18:36:25.788074 1912 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:36:25.789911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:36:25.790041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:36:26.658400 env[1356]: time="2024-02-09T18:36:26.658350628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:26.664785 env[1356]: time="2024-02-09T18:36:26.664756449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:26.668387 env[1356]: time="2024-02-09T18:36:26.668343423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:26.671288 env[1356]: time="2024-02-09T18:36:26.671252739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:26.671733 env[1356]: time="2024-02-09T18:36:26.671707322Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 18:36:31.718477 systemd[1]: Stopped kubelet.service. Feb 9 18:36:31.731501 systemd[1]: Reloading. 
Feb 9 18:36:31.798801 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-02-09T18:36:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:36:31.798838 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-02-09T18:36:31Z" level=info msg="torcx already run" Feb 9 18:36:31.863997 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:36:31.864019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:36:31.879082 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:36:31.981562 systemd[1]: Started kubelet.service. Feb 9 18:36:32.021254 kubelet[2059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:36:32.021583 kubelet[2059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:36:32.021641 kubelet[2059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:36:32.021767 kubelet[2059]: I0209 18:36:32.021733 2059 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:36:32.702926 kubelet[2059]: I0209 18:36:32.702892 2059 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 18:36:32.702926 kubelet[2059]: I0209 18:36:32.702919 2059 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:36:32.703153 kubelet[2059]: I0209 18:36:32.703133 2059 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 18:36:32.708182 kubelet[2059]: E0209 18:36:32.708157 2059 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.708233 kubelet[2059]: I0209 18:36:32.708214 2059 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:36:32.709389 kubelet[2059]: W0209 18:36:32.709374 2059 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:36:32.710027 kubelet[2059]: I0209 18:36:32.710012 2059 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:36:32.710348 kubelet[2059]: I0209 18:36:32.710337 2059 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:36:32.710492 kubelet[2059]: I0209 18:36:32.710470 2059 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:36:32.710614 kubelet[2059]: I0209 18:36:32.710604 2059 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:36:32.710697 kubelet[2059]: I0209 18:36:32.710688 2059 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 18:36:32.710839 kubelet[2059]: I0209 18:36:32.710828 2059 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
18:36:32.713586 kubelet[2059]: I0209 18:36:32.713569 2059 kubelet.go:405] "Attempting to sync node with API server" Feb 9 18:36:32.714078 kubelet[2059]: I0209 18:36:32.714065 2059 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:36:32.714197 kubelet[2059]: I0209 18:36:32.714186 2059 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:36:32.714261 kubelet[2059]: I0209 18:36:32.714252 2059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:36:32.714544 kubelet[2059]: W0209 18:36:32.714014 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b879aa43fa&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.714647 kubelet[2059]: E0209 18:36:32.714634 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b879aa43fa&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.715021 kubelet[2059]: W0209 18:36:32.714993 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.715145 kubelet[2059]: E0209 18:36:32.715134 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.715345 kubelet[2059]: I0209 18:36:32.715331 2059 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" 
apiVersion="v1" Feb 9 18:36:32.715631 kubelet[2059]: W0209 18:36:32.715616 2059 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:36:32.716054 kubelet[2059]: I0209 18:36:32.716035 2059 server.go:1168] "Started kubelet" Feb 9 18:36:32.724380 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 18:36:32.724476 kubelet[2059]: E0209 18:36:32.718912 2059 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:36:32.724476 kubelet[2059]: E0209 18:36:32.718934 2059 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:36:32.724476 kubelet[2059]: E0209 18:36:32.718996 2059 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b879aa43fa.17b245a71978715b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b879aa43fa", UID:"ci-3510.3.2-a-b879aa43fa", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b879aa43fa"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 36, 32, 716018011, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 36, 32, 716018011, 
time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.17:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:36:32.724476 kubelet[2059]: I0209 18:36:32.720119 2059 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:36:32.724476 kubelet[2059]: I0209 18:36:32.720660 2059 server.go:461] "Adding debug handlers to kubelet server" Feb 9 18:36:32.724652 kubelet[2059]: I0209 18:36:32.721503 2059 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:36:32.724814 kubelet[2059]: I0209 18:36:32.724802 2059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:36:32.726412 kubelet[2059]: I0209 18:36:32.726395 2059 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 18:36:32.726636 kubelet[2059]: I0209 18:36:32.726607 2059 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 18:36:32.726989 kubelet[2059]: W0209 18:36:32.726955 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.727086 kubelet[2059]: E0209 18:36:32.727076 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.727592 kubelet[2059]: E0209 18:36:32.727575 2059 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Feb 9 18:36:32.779579 kubelet[2059]: I0209 18:36:32.779547 2059 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:36:32.781252 kubelet[2059]: I0209 18:36:32.781236 2059 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:36:32.781777 kubelet[2059]: I0209 18:36:32.781765 2059 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 18:36:32.781888 kubelet[2059]: I0209 18:36:32.781877 2059 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 18:36:32.782002 kubelet[2059]: E0209 18:36:32.781991 2059 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:36:32.782762 kubelet[2059]: W0209 18:36:32.782725 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.783230 kubelet[2059]: E0209 18:36:32.783208 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:32.783374 kubelet[2059]: I0209 18:36:32.783354 2059 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:36:32.783374 kubelet[2059]: I0209 18:36:32.783373 2059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:36:32.783436 kubelet[2059]: I0209 18:36:32.783389 2059 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
18:36:32.788338 kubelet[2059]: I0209 18:36:32.788302 2059 policy_none.go:49] "None policy: Start" Feb 9 18:36:32.788930 kubelet[2059]: I0209 18:36:32.788910 2059 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:36:32.788978 kubelet[2059]: I0209 18:36:32.788947 2059 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:36:32.795885 systemd[1]: Created slice kubepods.slice. Feb 9 18:36:32.799746 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:36:32.802270 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:36:32.807313 kubelet[2059]: I0209 18:36:32.807248 2059 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:36:32.808497 kubelet[2059]: I0209 18:36:32.808474 2059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:36:32.809274 kubelet[2059]: E0209 18:36:32.809252 2059 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-b879aa43fa\" not found" Feb 9 18:36:32.828449 kubelet[2059]: I0209 18:36:32.828432 2059 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.828848 kubelet[2059]: E0209 18:36:32.828833 2059 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.883098 kubelet[2059]: I0209 18:36:32.883080 2059 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:32.884505 kubelet[2059]: I0209 18:36:32.884481 2059 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:32.885694 kubelet[2059]: I0209 18:36:32.885674 2059 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:32.891389 systemd[1]: Created slice kubepods-burstable-pod1fd66c0a2b1014958b920312728c3e65.slice. 
Feb 9 18:36:32.908176 systemd[1]: Created slice kubepods-burstable-pod0e4a5647ed130aae32f8e92aee6e9177.slice. Feb 9 18:36:32.917138 systemd[1]: Created slice kubepods-burstable-podf0b0b0a2c0c4c066efff9f6a0a916536.slice. Feb 9 18:36:32.927351 kubelet[2059]: I0209 18:36:32.927324 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927441 kubelet[2059]: I0209 18:36:32.927372 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927441 kubelet[2059]: I0209 18:36:32.927397 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927441 kubelet[2059]: I0209 18:36:32.927419 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927514 kubelet[2059]: I0209 18:36:32.927458 2059 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927514 kubelet[2059]: I0209 18:36:32.927481 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927514 kubelet[2059]: I0209 18:36:32.927502 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927591 kubelet[2059]: I0209 18:36:32.927533 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927591 kubelet[2059]: I0209 18:36:32.927555 2059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e4a5647ed130aae32f8e92aee6e9177-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b879aa43fa\" (UID: 
\"0e4a5647ed130aae32f8e92aee6e9177\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:32.927988 kubelet[2059]: E0209 18:36:32.927973 2059 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Feb 9 18:36:33.032917 kubelet[2059]: I0209 18:36:33.030730 2059 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:33.033325 kubelet[2059]: E0209 18:36:33.033308 2059 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:33.208758 env[1356]: time="2024-02-09T18:36:33.208435327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b879aa43fa,Uid:1fd66c0a2b1014958b920312728c3e65,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:33.216995 env[1356]: time="2024-02-09T18:36:33.216753300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b879aa43fa,Uid:0e4a5647ed130aae32f8e92aee6e9177,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:33.219799 env[1356]: time="2024-02-09T18:36:33.219669930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b879aa43fa,Uid:f0b0b0a2c0c4c066efff9f6a0a916536,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:33.329120 kubelet[2059]: E0209 18:36:33.328750 2059 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Feb 9 18:36:33.435587 kubelet[2059]: I0209 
18:36:33.435556 2059 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:33.435908 kubelet[2059]: E0209 18:36:33.435888 2059 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:33.602822 kubelet[2059]: W0209 18:36:33.602766 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:33.602946 kubelet[2059]: E0209 18:36:33.602838 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:33.631585 kubelet[2059]: E0209 18:36:33.631486 2059 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b879aa43fa.17b245a71978715b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b879aa43fa", UID:"ci-3510.3.2-a-b879aa43fa", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b879aa43fa"}, 
FirstTimestamp:time.Date(2024, time.February, 9, 18, 36, 32, 716018011, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 36, 32, 716018011, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.17:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:36:33.857100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982134427.mount: Deactivated successfully. Feb 9 18:36:33.882578 env[1356]: time="2024-02-09T18:36:33.882527527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.901260 env[1356]: time="2024-02-09T18:36:33.901224305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.907971 env[1356]: time="2024-02-09T18:36:33.907938751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.911848 env[1356]: time="2024-02-09T18:36:33.911814650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.920542 env[1356]: time="2024-02-09T18:36:33.920509966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.925077 env[1356]: time="2024-02-09T18:36:33.925049913Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.928144 env[1356]: time="2024-02-09T18:36:33.928117112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.932519 env[1356]: time="2024-02-09T18:36:33.932485261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.935037 env[1356]: time="2024-02-09T18:36:33.935010155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.939361 env[1356]: time="2024-02-09T18:36:33.939326087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.941660 env[1356]: time="2024-02-09T18:36:33.941620010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:33.954875 env[1356]: time="2024-02-09T18:36:33.954834802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.034447 env[1356]: time="2024-02-09T18:36:34.031537211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:34.034447 env[1356]: time="2024-02-09T18:36:34.031580391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:34.034447 env[1356]: time="2024-02-09T18:36:34.031591067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:34.034447 env[1356]: time="2024-02-09T18:36:34.031694379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c2afce6250b1d05bd2b81c70e55f3715dc777ddb3bba75e2d6cfe0fbbc2657 pid=2097 runtime=io.containerd.runc.v2 Feb 9 18:36:34.035802 env[1356]: time="2024-02-09T18:36:34.035722056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:34.035802 env[1356]: time="2024-02-09T18:36:34.035749283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:34.035802 env[1356]: time="2024-02-09T18:36:34.035758759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:34.035990 env[1356]: time="2024-02-09T18:36:34.035858833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a pid=2115 runtime=io.containerd.runc.v2 Feb 9 18:36:34.052640 systemd[1]: Started cri-containerd-91c2afce6250b1d05bd2b81c70e55f3715dc777ddb3bba75e2d6cfe0fbbc2657.scope. Feb 9 18:36:34.058863 env[1356]: time="2024-02-09T18:36:34.057713988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:34.058863 env[1356]: time="2024-02-09T18:36:34.057763645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:34.058863 env[1356]: time="2024-02-09T18:36:34.057775360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:34.058863 env[1356]: time="2024-02-09T18:36:34.057929250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650 pid=2141 runtime=io.containerd.runc.v2 Feb 9 18:36:34.062022 systemd[1]: Started cri-containerd-5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a.scope. Feb 9 18:36:34.073232 kubelet[2059]: W0209 18:36:34.073172 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b879aa43fa&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.073232 kubelet[2059]: E0209 18:36:34.073235 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b879aa43fa&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.084548 systemd[1]: Started cri-containerd-6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650.scope. 
Feb 9 18:36:34.115213 env[1356]: time="2024-02-09T18:36:34.114083104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b879aa43fa,Uid:1fd66c0a2b1014958b920312728c3e65,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c2afce6250b1d05bd2b81c70e55f3715dc777ddb3bba75e2d6cfe0fbbc2657\"" Feb 9 18:36:34.120609 env[1356]: time="2024-02-09T18:36:34.120573773Z" level=info msg="CreateContainer within sandbox \"91c2afce6250b1d05bd2b81c70e55f3715dc777ddb3bba75e2d6cfe0fbbc2657\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:36:34.122571 env[1356]: time="2024-02-09T18:36:34.122543991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b879aa43fa,Uid:0e4a5647ed130aae32f8e92aee6e9177,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a\"" Feb 9 18:36:34.125015 env[1356]: time="2024-02-09T18:36:34.124988112Z" level=info msg="CreateContainer within sandbox \"5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:36:34.125808 kubelet[2059]: W0209 18:36:34.125745 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.125808 kubelet[2059]: E0209 18:36:34.125788 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.130264 kubelet[2059]: E0209 18:36:34.130228 2059 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Feb 9 18:36:34.132181 env[1356]: time="2024-02-09T18:36:34.132135720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b879aa43fa,Uid:f0b0b0a2c0c4c066efff9f6a0a916536,Namespace:kube-system,Attempt:0,} returns sandbox id \"6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650\"" Feb 9 18:36:34.136900 env[1356]: time="2024-02-09T18:36:34.136815058Z" level=info msg="CreateContainer within sandbox \"6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:36:34.191539 env[1356]: time="2024-02-09T18:36:34.191497346Z" level=info msg="CreateContainer within sandbox \"91c2afce6250b1d05bd2b81c70e55f3715dc777ddb3bba75e2d6cfe0fbbc2657\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3799dc6c915730d6e93ee50617ef8c7cda3bbb0bc0e5261fd057c42edcb72dfa\"" Feb 9 18:36:34.192418 env[1356]: time="2024-02-09T18:36:34.192394455Z" level=info msg="StartContainer for \"3799dc6c915730d6e93ee50617ef8c7cda3bbb0bc0e5261fd057c42edcb72dfa\"" Feb 9 18:36:34.195229 env[1356]: time="2024-02-09T18:36:34.195190975Z" level=info msg="CreateContainer within sandbox \"5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e\"" Feb 9 18:36:34.195796 env[1356]: time="2024-02-09T18:36:34.195774987Z" level=info msg="StartContainer for \"c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e\"" Feb 9 18:36:34.197977 env[1356]: time="2024-02-09T18:36:34.197935319Z" level=info msg="CreateContainer within sandbox 
\"6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced\"" Feb 9 18:36:34.198438 env[1356]: time="2024-02-09T18:36:34.198397427Z" level=info msg="StartContainer for \"143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced\"" Feb 9 18:36:34.223177 systemd[1]: Started cri-containerd-143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced.scope. Feb 9 18:36:34.226338 systemd[1]: Started cri-containerd-3799dc6c915730d6e93ee50617ef8c7cda3bbb0bc0e5261fd057c42edcb72dfa.scope. Feb 9 18:36:34.235004 kubelet[2059]: W0209 18:36:34.234934 2059 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.235004 kubelet[2059]: E0209 18:36:34.234995 2059 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Feb 9 18:36:34.237337 kubelet[2059]: I0209 18:36:34.237308 2059 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:34.237673 kubelet[2059]: E0209 18:36:34.237656 2059 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:34.241593 systemd[1]: Started cri-containerd-c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e.scope. 
Feb 9 18:36:34.277712 env[1356]: time="2024-02-09T18:36:34.277672297Z" level=info msg="StartContainer for \"3799dc6c915730d6e93ee50617ef8c7cda3bbb0bc0e5261fd057c42edcb72dfa\" returns successfully" Feb 9 18:36:34.308961 env[1356]: time="2024-02-09T18:36:34.308483952Z" level=info msg="StartContainer for \"143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced\" returns successfully" Feb 9 18:36:34.317264 env[1356]: time="2024-02-09T18:36:34.317198283Z" level=info msg="StartContainer for \"c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e\" returns successfully" Feb 9 18:36:35.839758 kubelet[2059]: I0209 18:36:35.839719 2059 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:36.957336 kubelet[2059]: E0209 18:36:36.957265 2059 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-b879aa43fa\" not found" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:36.988554 kubelet[2059]: I0209 18:36:36.988512 2059 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:37.717022 kubelet[2059]: I0209 18:36:37.716986 2059 apiserver.go:52] "Watching apiserver" Feb 9 18:36:37.726849 kubelet[2059]: I0209 18:36:37.726828 2059 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:36:37.751774 kubelet[2059]: I0209 18:36:37.751758 2059 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:36:39.380719 kubelet[2059]: W0209 18:36:39.380693 2059 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:36:40.432864 kubelet[2059]: W0209 18:36:40.432836 2059 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:36:40.578205 systemd[1]: Reloading. 
Feb 9 18:36:40.663339 /usr/lib/systemd/system-generators/torcx-generator[2348]: time="2024-02-09T18:36:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:36:40.663364 /usr/lib/systemd/system-generators/torcx-generator[2348]: time="2024-02-09T18:36:40Z" level=info msg="torcx already run" Feb 9 18:36:40.736773 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:36:40.736793 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:36:40.752209 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:36:40.865995 kubelet[2059]: I0209 18:36:40.865947 2059 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:36:40.868107 systemd[1]: Stopping kubelet.service... Feb 9 18:36:40.888082 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:36:40.888290 systemd[1]: Stopped kubelet.service. Feb 9 18:36:40.890036 systemd[1]: Started kubelet.service. Feb 9 18:36:40.964592 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:36:40.964592 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 9 18:36:40.964592 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:36:40.964909 kubelet[2407]: I0209 18:36:40.964662 2407 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:36:40.968662 kubelet[2407]: I0209 18:36:40.968642 2407 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 18:36:40.968807 kubelet[2407]: I0209 18:36:40.968796 2407 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:36:40.969066 kubelet[2407]: I0209 18:36:40.969052 2407 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 18:36:40.970571 kubelet[2407]: I0209 18:36:40.970554 2407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:36:40.971775 kubelet[2407]: I0209 18:36:40.971744 2407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:36:40.975056 kubelet[2407]: W0209 18:36:40.975020 2407 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:36:40.975872 kubelet[2407]: I0209 18:36:40.975856 2407 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:36:40.976146 kubelet[2407]: I0209 18:36:40.976133 2407 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:36:40.976274 kubelet[2407]: I0209 18:36:40.976263 2407 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:36:40.976435 kubelet[2407]: I0209 18:36:40.976423 2407 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:36:40.976496 kubelet[2407]: I0209 18:36:40.976488 2407 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 18:36:40.976571 kubelet[2407]: I0209 18:36:40.976562 2407 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
18:36:40.981583 kubelet[2407]: I0209 18:36:40.981559 2407 kubelet.go:405] "Attempting to sync node with API server" Feb 9 18:36:40.981583 kubelet[2407]: I0209 18:36:40.981584 2407 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:36:40.982340 kubelet[2407]: I0209 18:36:40.982324 2407 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:36:40.982461 kubelet[2407]: I0209 18:36:40.982451 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:36:40.988672 kubelet[2407]: I0209 18:36:40.987689 2407 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:36:40.989337 kubelet[2407]: I0209 18:36:40.989322 2407 server.go:1168] "Started kubelet" Feb 9 18:36:40.991622 kubelet[2407]: I0209 18:36:40.991606 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:36:40.999206 kubelet[2407]: E0209 18:36:40.999182 2407 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:36:40.999206 kubelet[2407]: E0209 18:36:40.999209 2407 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:36:41.013141 kubelet[2407]: I0209 18:36:41.013116 2407 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:36:41.018184 sudo[2423]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:36:41.018469 sudo[2423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:36:41.036122 kubelet[2407]: I0209 18:36:41.036096 2407 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:36:41.037768 kubelet[2407]: I0209 18:36:41.037751 2407 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 18:36:41.038694 kubelet[2407]: I0209 18:36:41.038679 2407 server.go:461] "Adding debug handlers to kubelet server" Feb 9 18:36:41.044360 kubelet[2407]: I0209 18:36:41.043911 2407 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 18:36:41.060652 kubelet[2407]: I0209 18:36:41.060629 2407 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:36:41.062259 kubelet[2407]: I0209 18:36:41.062228 2407 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:36:41.062259 kubelet[2407]: I0209 18:36:41.062257 2407 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 18:36:41.062379 kubelet[2407]: I0209 18:36:41.062295 2407 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 18:36:41.062379 kubelet[2407]: E0209 18:36:41.062344 2407 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:36:41.111498 kubelet[2407]: I0209 18:36:41.111474 2407 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:36:41.111746 kubelet[2407]: I0209 18:36:41.111730 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:36:41.111847 kubelet[2407]: I0209 18:36:41.111836 2407 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:36:41.112117 kubelet[2407]: I0209 18:36:41.112105 2407 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:36:41.112321 kubelet[2407]: I0209 18:36:41.112272 2407 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:36:41.112398 kubelet[2407]: I0209 18:36:41.112389 2407 policy_none.go:49] "None policy: Start" Feb 9 18:36:41.113365 kubelet[2407]: I0209 18:36:41.113261 2407 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:36:41.113522 kubelet[2407]: I0209 18:36:41.113510 2407 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:36:41.113792 kubelet[2407]: I0209 18:36:41.113779 2407 state_mem.go:75] "Updated machine memory state" Feb 9 18:36:41.121180 kubelet[2407]: I0209 18:36:41.118501 2407 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:36:41.126308 kubelet[2407]: I0209 18:36:41.126263 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:36:41.147922 kubelet[2407]: I0209 18:36:41.147694 2407 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.160738 kubelet[2407]: I0209 18:36:41.160706 2407 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.160858 kubelet[2407]: I0209 18:36:41.160791 2407 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.163065 kubelet[2407]: I0209 18:36:41.163037 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:41.163153 kubelet[2407]: I0209 18:36:41.163123 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:41.163182 kubelet[2407]: I0209 18:36:41.163168 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:41.176828 kubelet[2407]: W0209 18:36:41.176799 2407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:36:41.183926 kubelet[2407]: W0209 18:36:41.183894 2407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:36:41.184029 kubelet[2407]: E0209 18:36:41.183967 2407 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.184140 kubelet[2407]: W0209 18:36:41.184120 2407 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:36:41.184194 kubelet[2407]: E0209 18:36:41.184179 2407 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245608 kubelet[2407]: I0209 18:36:41.245520 2407 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245608 kubelet[2407]: I0209 18:36:41.245560 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245608 kubelet[2407]: I0209 18:36:41.245592 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245769 kubelet[2407]: I0209 18:36:41.245615 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245769 kubelet[2407]: I0209 18:36:41.245640 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: 
\"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245769 kubelet[2407]: I0209 18:36:41.245669 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e4a5647ed130aae32f8e92aee6e9177-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b879aa43fa\" (UID: \"0e4a5647ed130aae32f8e92aee6e9177\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245769 kubelet[2407]: I0209 18:36:41.245689 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245769 kubelet[2407]: I0209 18:36:41.245718 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd66c0a2b1014958b920312728c3e65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b879aa43fa\" (UID: \"1fd66c0a2b1014958b920312728c3e65\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.245901 kubelet[2407]: I0209 18:36:41.245749 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0b0b0a2c0c4c066efff9f6a0a916536-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b879aa43fa\" (UID: \"f0b0b0a2c0c4c066efff9f6a0a916536\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" Feb 9 18:36:41.569600 sudo[2423]: pam_unix(sudo:session): session closed for user root Feb 9 18:36:41.988108 kubelet[2407]: I0209 18:36:41.988069 2407 apiserver.go:52] "Watching apiserver" Feb 9 
18:36:42.044521 kubelet[2407]: I0209 18:36:42.044491 2407 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:36:42.050830 kubelet[2407]: I0209 18:36:42.050809 2407 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:36:42.106268 kubelet[2407]: I0209 18:36:42.106234 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b879aa43fa" podStartSLOduration=1.106200471 podCreationTimestamp="2024-02-09 18:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:42.105605374 +0000 UTC m=+1.211704337" watchObservedRunningTime="2024-02-09 18:36:42.106200471 +0000 UTC m=+1.212299474" Feb 9 18:36:42.125942 kubelet[2407]: I0209 18:36:42.125907 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" podStartSLOduration=2.125872501 podCreationTimestamp="2024-02-09 18:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:42.115511063 +0000 UTC m=+1.221610066" watchObservedRunningTime="2024-02-09 18:36:42.125872501 +0000 UTC m=+1.231971504" Feb 9 18:36:42.137164 kubelet[2407]: I0209 18:36:42.137133 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b879aa43fa" podStartSLOduration=3.137106372 podCreationTimestamp="2024-02-09 18:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:42.126983845 +0000 UTC m=+1.233082808" watchObservedRunningTime="2024-02-09 18:36:42.137106372 +0000 UTC m=+1.243205375" Feb 9 18:36:43.250528 sudo[1652]: pam_unix(sudo:session): session closed for user root Feb 9 18:36:43.324107 
sshd[1649]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:43.326884 systemd-logind[1339]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:36:43.327051 systemd[1]: sshd@4-10.200.20.17:22-10.200.12.6:55656.service: Deactivated successfully. Feb 9 18:36:43.327792 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:36:43.327974 systemd[1]: session-7.scope: Consumed 6.633s CPU time. Feb 9 18:36:43.328553 systemd-logind[1339]: Removed session 7. Feb 9 18:36:53.458262 kubelet[2407]: I0209 18:36:53.458225 2407 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:36:53.458961 env[1356]: time="2024-02-09T18:36:53.458882395Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:36:53.459186 kubelet[2407]: I0209 18:36:53.459052 2407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:36:53.751122 kubelet[2407]: I0209 18:36:53.751003 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:53.756208 systemd[1]: Created slice kubepods-besteffort-pod44ca77ca_be74_4ec0_b2b1_9df69db59950.slice. Feb 9 18:36:53.765762 kubelet[2407]: I0209 18:36:53.765730 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:53.771010 systemd[1]: Created slice kubepods-burstable-pod55d51b3d_6c49_4bca_8284_c4a993836db0.slice. 
Feb 9 18:36:53.805846 kubelet[2407]: I0209 18:36:53.805801 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-etc-cni-netd\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.805846 kubelet[2407]: I0209 18:36:53.805844 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-hubble-tls\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805867 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdft\" (UniqueName: \"kubernetes.io/projected/44ca77ca-be74-4ec0-b2b1-9df69db59950-kube-api-access-dwdft\") pod \"kube-proxy-np42d\" (UID: \"44ca77ca-be74-4ec0-b2b1-9df69db59950\") " pod="kube-system/kube-proxy-np42d" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805887 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-hostproc\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805905 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-bpf-maps\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805923 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44ca77ca-be74-4ec0-b2b1-9df69db59950-xtables-lock\") pod \"kube-proxy-np42d\" (UID: \"44ca77ca-be74-4ec0-b2b1-9df69db59950\") " pod="kube-system/kube-proxy-np42d" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805942 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44ca77ca-be74-4ec0-b2b1-9df69db59950-lib-modules\") pod \"kube-proxy-np42d\" (UID: \"44ca77ca-be74-4ec0-b2b1-9df69db59950\") " pod="kube-system/kube-proxy-np42d" Feb 9 18:36:53.806023 kubelet[2407]: I0209 18:36:53.805960 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-xtables-lock\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.805980 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-config-path\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.805998 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-net\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.806025 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-kernel\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.806043 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-cgroup\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.806060 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cni-path\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806231 kubelet[2407]: I0209 18:36:53.806077 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-lib-modules\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806397 kubelet[2407]: I0209 18:36:53.806098 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpcfc\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806397 kubelet[2407]: I0209 18:36:53.806116 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44ca77ca-be74-4ec0-b2b1-9df69db59950-kube-proxy\") pod \"kube-proxy-np42d\" (UID: 
\"44ca77ca-be74-4ec0-b2b1-9df69db59950\") " pod="kube-system/kube-proxy-np42d" Feb 9 18:36:53.806397 kubelet[2407]: I0209 18:36:53.806134 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-run\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.806397 kubelet[2407]: I0209 18:36:53.806153 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d51b3d-6c49-4bca-8284-c4a993836db0-clustermesh-secrets\") pod \"cilium-2zrzr\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") " pod="kube-system/cilium-2zrzr" Feb 9 18:36:53.808904 kubelet[2407]: I0209 18:36:53.808874 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:36:53.813639 systemd[1]: Created slice kubepods-besteffort-podae0e75a6_2aca_4cb3_8508_2a30c9b250d7.slice. 
Feb 9 18:36:53.907119 kubelet[2407]: I0209 18:36:53.907075 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-cilium-config-path\") pod \"cilium-operator-574c4bb98d-gfxx7\" (UID: \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\") " pod="kube-system/cilium-operator-574c4bb98d-gfxx7" Feb 9 18:36:53.907589 kubelet[2407]: I0209 18:36:53.907563 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvnf\" (UniqueName: \"kubernetes.io/projected/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-kube-api-access-vkvnf\") pod \"cilium-operator-574c4bb98d-gfxx7\" (UID: \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\") " pod="kube-system/cilium-operator-574c4bb98d-gfxx7" Feb 9 18:36:53.923612 kubelet[2407]: E0209 18:36:53.923580 2407 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 18:36:53.923612 kubelet[2407]: E0209 18:36:53.923610 2407 projected.go:198] Error preparing data for projected volume kube-api-access-wpcfc for pod kube-system/cilium-2zrzr: configmap "kube-root-ca.crt" not found Feb 9 18:36:53.923740 kubelet[2407]: E0209 18:36:53.923660 2407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc podName:55d51b3d-6c49-4bca-8284-c4a993836db0 nodeName:}" failed. No retries permitted until 2024-02-09 18:36:54.423642358 +0000 UTC m=+13.529741361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wpcfc" (UniqueName: "kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc") pod "cilium-2zrzr" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0") : configmap "kube-root-ca.crt" not found Feb 9 18:36:53.925140 kubelet[2407]: E0209 18:36:53.925110 2407 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 18:36:53.925229 kubelet[2407]: E0209 18:36:53.925159 2407 projected.go:198] Error preparing data for projected volume kube-api-access-dwdft for pod kube-system/kube-proxy-np42d: configmap "kube-root-ca.crt" not found Feb 9 18:36:53.925229 kubelet[2407]: E0209 18:36:53.925194 2407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44ca77ca-be74-4ec0-b2b1-9df69db59950-kube-api-access-dwdft podName:44ca77ca-be74-4ec0-b2b1-9df69db59950 nodeName:}" failed. No retries permitted until 2024-02-09 18:36:54.425182672 +0000 UTC m=+13.531281675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dwdft" (UniqueName: "kubernetes.io/projected/44ca77ca-be74-4ec0-b2b1-9df69db59950-kube-api-access-dwdft") pod "kube-proxy-np42d" (UID: "44ca77ca-be74-4ec0-b2b1-9df69db59950") : configmap "kube-root-ca.crt" not found Feb 9 18:36:54.117269 env[1356]: time="2024-02-09T18:36:54.116892658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gfxx7,Uid:ae0e75a6-2aca-4cb3-8508-2a30c9b250d7,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:54.152526 env[1356]: time="2024-02-09T18:36:54.152454732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:54.152526 env[1356]: time="2024-02-09T18:36:54.152491521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:54.152526 env[1356]: time="2024-02-09T18:36:54.152502558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:54.152951 env[1356]: time="2024-02-09T18:36:54.152897846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520 pid=2487 runtime=io.containerd.runc.v2 Feb 9 18:36:54.164904 systemd[1]: Started cri-containerd-3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520.scope. Feb 9 18:36:54.201422 env[1356]: time="2024-02-09T18:36:54.201370698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gfxx7,Uid:ae0e75a6-2aca-4cb3-8508-2a30c9b250d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\"" Feb 9 18:36:54.204546 env[1356]: time="2024-02-09T18:36:54.203503973Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:36:54.662919 env[1356]: time="2024-02-09T18:36:54.662885086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np42d,Uid:44ca77ca-be74-4ec0-b2b1-9df69db59950,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:54.674016 env[1356]: time="2024-02-09T18:36:54.673975341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zrzr,Uid:55d51b3d-6c49-4bca-8284-c4a993836db0,Namespace:kube-system,Attempt:0,}" Feb 9 18:36:54.707831 env[1356]: time="2024-02-09T18:36:54.701834839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:54.707831 env[1356]: time="2024-02-09T18:36:54.701884145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:54.707831 env[1356]: time="2024-02-09T18:36:54.701894942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:54.707831 env[1356]: time="2024-02-09T18:36:54.702036262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c2e952d6828f2bc9f75ed7b6c832dd6942c1b9b064112fb17a4e69221b1bca0 pid=2531 runtime=io.containerd.runc.v2 Feb 9 18:36:54.718532 systemd[1]: Started cri-containerd-5c2e952d6828f2bc9f75ed7b6c832dd6942c1b9b064112fb17a4e69221b1bca0.scope. Feb 9 18:36:54.728263 env[1356]: time="2024-02-09T18:36:54.727593134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:54.728263 env[1356]: time="2024-02-09T18:36:54.727625765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:54.728263 env[1356]: time="2024-02-09T18:36:54.727645399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:54.728263 env[1356]: time="2024-02-09T18:36:54.727746890Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414 pid=2559 runtime=io.containerd.runc.v2 Feb 9 18:36:54.744630 systemd[1]: Started cri-containerd-0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414.scope. 
Feb 9 18:36:54.751498 env[1356]: time="2024-02-09T18:36:54.751455046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np42d,Uid:44ca77ca-be74-4ec0-b2b1-9df69db59950,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c2e952d6828f2bc9f75ed7b6c832dd6942c1b9b064112fb17a4e69221b1bca0\""
Feb 9 18:36:54.756590 env[1356]: time="2024-02-09T18:36:54.756267481Z" level=info msg="CreateContainer within sandbox \"5c2e952d6828f2bc9f75ed7b6c832dd6942c1b9b064112fb17a4e69221b1bca0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 18:36:54.772126 env[1356]: time="2024-02-09T18:36:54.772096792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zrzr,Uid:55d51b3d-6c49-4bca-8284-c4a993836db0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\""
Feb 9 18:36:54.807591 env[1356]: time="2024-02-09T18:36:54.807549657Z" level=info msg="CreateContainer within sandbox \"5c2e952d6828f2bc9f75ed7b6c832dd6942c1b9b064112fb17a4e69221b1bca0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b89e51f87804c72dd716401694db28b84600cb914fc41458fc31c4ac1b689edf\""
Feb 9 18:36:54.809188 env[1356]: time="2024-02-09T18:36:54.808183437Z" level=info msg="StartContainer for \"b89e51f87804c72dd716401694db28b84600cb914fc41458fc31c4ac1b689edf\""
Feb 9 18:36:54.824130 systemd[1]: Started cri-containerd-b89e51f87804c72dd716401694db28b84600cb914fc41458fc31c4ac1b689edf.scope.
Feb 9 18:36:54.859716 env[1356]: time="2024-02-09T18:36:54.859674274Z" level=info msg="StartContainer for \"b89e51f87804c72dd716401694db28b84600cb914fc41458fc31c4ac1b689edf\" returns successfully"
Feb 9 18:36:56.064002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370416133.mount: Deactivated successfully.
Feb 9 18:36:57.335945 env[1356]: time="2024-02-09T18:36:57.335893515Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:36:57.342656 env[1356]: time="2024-02-09T18:36:57.342610370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:36:57.345927 env[1356]: time="2024-02-09T18:36:57.345898776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:36:57.346381 env[1356]: time="2024-02-09T18:36:57.346352295Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 9 18:36:57.349022 env[1356]: time="2024-02-09T18:36:57.348917734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 18:36:57.350369 env[1356]: time="2024-02-09T18:36:57.350341235Z" level=info msg="CreateContainer within sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 18:36:57.375476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626459183.mount: Deactivated successfully.
Feb 9 18:36:57.381069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204170940.mount: Deactivated successfully.
Feb 9 18:36:57.392409 env[1356]: time="2024-02-09T18:36:57.392355709Z" level=info msg="CreateContainer within sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\""
Feb 9 18:36:57.394381 env[1356]: time="2024-02-09T18:36:57.393319812Z" level=info msg="StartContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\""
Feb 9 18:36:57.411517 systemd[1]: Started cri-containerd-7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae.scope.
Feb 9 18:36:57.443828 env[1356]: time="2024-02-09T18:36:57.443770404Z" level=info msg="StartContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" returns successfully"
Feb 9 18:36:58.125497 kubelet[2407]: I0209 18:36:58.125471 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-np42d" podStartSLOduration=5.125422767 podCreationTimestamp="2024-02-09 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:55.118318996 +0000 UTC m=+14.224417999" watchObservedRunningTime="2024-02-09 18:36:58.125422767 +0000 UTC m=+17.231521770"
Feb 9 18:37:01.078603 kubelet[2407]: I0209 18:37:01.078569 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-gfxx7" podStartSLOduration=4.934219195 podCreationTimestamp="2024-02-09 18:36:53 +0000 UTC" firstStartedPulling="2024-02-09 18:36:54.202582635 +0000 UTC m=+13.308681638" lastFinishedPulling="2024-02-09 18:36:57.346874437 +0000 UTC m=+16.452973480" observedRunningTime="2024-02-09 18:36:58.126423866 +0000 UTC m=+17.232522869" watchObservedRunningTime="2024-02-09 18:37:01.078511037 +0000 UTC m=+20.184610040"
Feb 9 18:37:02.493012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136872118.mount: Deactivated successfully.
Feb 9 18:37:05.661700 env[1356]: time="2024-02-09T18:37:05.661657143Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:37:05.669588 env[1356]: time="2024-02-09T18:37:05.669556120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:37:05.675387 env[1356]: time="2024-02-09T18:37:05.675335536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:37:05.676030 env[1356]: time="2024-02-09T18:37:05.676001985Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 18:37:05.679655 env[1356]: time="2024-02-09T18:37:05.679622288Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:37:05.721554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317457110.mount: Deactivated successfully.
Feb 9 18:37:05.725983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27511314.mount: Deactivated successfully.
Feb 9 18:37:05.764831 env[1356]: time="2024-02-09T18:37:05.764780543Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\""
Feb 9 18:37:05.767159 env[1356]: time="2024-02-09T18:37:05.766671876Z" level=info msg="StartContainer for \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\""
Feb 9 18:37:05.784743 systemd[1]: Started cri-containerd-e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966.scope.
Feb 9 18:37:05.821078 systemd[1]: cri-containerd-e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966.scope: Deactivated successfully.
Feb 9 18:37:05.822061 env[1356]: time="2024-02-09T18:37:05.821995467Z" level=info msg="StartContainer for \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\" returns successfully"
Feb 9 18:37:06.719374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966-rootfs.mount: Deactivated successfully.
Feb 9 18:37:06.745496 env[1356]: time="2024-02-09T18:37:06.745433101Z" level=info msg="shim disconnected" id=e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966
Feb 9 18:37:06.745496 env[1356]: time="2024-02-09T18:37:06.745491368Z" level=warning msg="cleaning up after shim disconnected" id=e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966 namespace=k8s.io
Feb 9 18:37:06.745496 env[1356]: time="2024-02-09T18:37:06.745501086Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:06.752838 env[1356]: time="2024-02-09T18:37:06.752790752Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2848 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:07.144917 env[1356]: time="2024-02-09T18:37:07.144873853Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:37:07.252494 env[1356]: time="2024-02-09T18:37:07.252445485Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\""
Feb 9 18:37:07.253205 env[1356]: time="2024-02-09T18:37:07.253180925Z" level=info msg="StartContainer for \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\""
Feb 9 18:37:07.271693 systemd[1]: Started cri-containerd-c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e.scope.
Feb 9 18:37:07.299554 env[1356]: time="2024-02-09T18:37:07.299490745Z" level=info msg="StartContainer for \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\" returns successfully"
Feb 9 18:37:07.308072 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:37:07.308446 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:37:07.308710 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 18:37:07.310422 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:37:07.314871 systemd[1]: cri-containerd-c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e.scope: Deactivated successfully.
Feb 9 18:37:07.318500 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:37:07.354057 env[1356]: time="2024-02-09T18:37:07.354004783Z" level=info msg="shim disconnected" id=c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e
Feb 9 18:37:07.354057 env[1356]: time="2024-02-09T18:37:07.354053692Z" level=warning msg="cleaning up after shim disconnected" id=c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e namespace=k8s.io
Feb 9 18:37:07.354057 env[1356]: time="2024-02-09T18:37:07.354063770Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:07.361686 env[1356]: time="2024-02-09T18:37:07.361645243Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2912 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:07.719165 systemd[1]: run-containerd-runc-k8s.io-c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e-runc.H17LVR.mount: Deactivated successfully.
Feb 9 18:37:07.719295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e-rootfs.mount: Deactivated successfully.
Feb 9 18:37:08.138736 env[1356]: time="2024-02-09T18:37:08.138672832Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:37:08.184511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060115806.mount: Deactivated successfully.
Feb 9 18:37:08.191349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372217750.mount: Deactivated successfully.
Feb 9 18:37:08.215740 env[1356]: time="2024-02-09T18:37:08.215687654Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\""
Feb 9 18:37:08.216258 env[1356]: time="2024-02-09T18:37:08.216234218Z" level=info msg="StartContainer for \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\""
Feb 9 18:37:08.233402 systemd[1]: Started cri-containerd-998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224.scope.
Feb 9 18:37:08.264850 systemd[1]: cri-containerd-998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224.scope: Deactivated successfully.
Feb 9 18:37:08.277674 env[1356]: time="2024-02-09T18:37:08.277633209Z" level=info msg="StartContainer for \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\" returns successfully"
Feb 9 18:37:08.368708 env[1356]: time="2024-02-09T18:37:08.368663564Z" level=info msg="shim disconnected" id=998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224
Feb 9 18:37:08.369026 env[1356]: time="2024-02-09T18:37:08.368996813Z" level=warning msg="cleaning up after shim disconnected" id=998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224 namespace=k8s.io
Feb 9 18:37:08.369101 env[1356]: time="2024-02-09T18:37:08.369088474Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:08.377683 env[1356]: time="2024-02-09T18:37:08.377650808Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2970 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:09.144378 env[1356]: time="2024-02-09T18:37:09.144333415Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:37:09.189924 env[1356]: time="2024-02-09T18:37:09.189855530Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\""
Feb 9 18:37:09.190824 env[1356]: time="2024-02-09T18:37:09.190798492Z" level=info msg="StartContainer for \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\""
Feb 9 18:37:09.217150 systemd[1]: Started cri-containerd-dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea.scope.
Feb 9 18:37:09.240888 systemd[1]: cri-containerd-dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea.scope: Deactivated successfully.
Feb 9 18:37:09.242701 env[1356]: time="2024-02-09T18:37:09.242613891Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d51b3d_6c49_4bca_8284_c4a993836db0.slice/cri-containerd-dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea.scope/memory.events\": no such file or directory"
Feb 9 18:37:09.247819 env[1356]: time="2024-02-09T18:37:09.247778650Z" level=info msg="StartContainer for \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\" returns successfully"
Feb 9 18:37:09.282957 env[1356]: time="2024-02-09T18:37:09.282903460Z" level=info msg="shim disconnected" id=dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea
Feb 9 18:37:09.282957 env[1356]: time="2024-02-09T18:37:09.282953450Z" level=warning msg="cleaning up after shim disconnected" id=dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea namespace=k8s.io
Feb 9 18:37:09.282957 env[1356]: time="2024-02-09T18:37:09.282962688Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:09.290423 env[1356]: time="2024-02-09T18:37:09.290378336Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3024 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:09.719212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea-rootfs.mount: Deactivated successfully.
Feb 9 18:37:10.148045 env[1356]: time="2024-02-09T18:37:10.147882066Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:37:10.183268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915013359.mount: Deactivated successfully.
Feb 9 18:37:10.192525 env[1356]: time="2024-02-09T18:37:10.192470026Z" level=info msg="CreateContainer within sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\""
Feb 9 18:37:10.193236 env[1356]: time="2024-02-09T18:37:10.193199357Z" level=info msg="StartContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\""
Feb 9 18:37:10.212789 systemd[1]: Started cri-containerd-f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9.scope.
Feb 9 18:37:10.249814 env[1356]: time="2024-02-09T18:37:10.249771935Z" level=info msg="StartContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" returns successfully"
Feb 9 18:37:10.325366 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:37:10.331377 kubelet[2407]: I0209 18:37:10.331339 2407 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 18:37:10.418613 kubelet[2407]: I0209 18:37:10.418518 2407 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:37:10.421257 kubelet[2407]: I0209 18:37:10.421237 2407 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:37:10.424615 systemd[1]: Created slice kubepods-burstable-pod1dc16ecd_228a_413b_90a8_2837eb959c69.slice.
Feb 9 18:37:10.429252 systemd[1]: Created slice kubepods-burstable-pod515a63a0_c793_4819_9efd_ef015afba1ff.slice.
Feb 9 18:37:10.502555 kubelet[2407]: I0209 18:37:10.502527 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/515a63a0-c793-4819-9efd-ef015afba1ff-config-volume\") pod \"coredns-5d78c9869d-568vr\" (UID: \"515a63a0-c793-4819-9efd-ef015afba1ff\") " pod="kube-system/coredns-5d78c9869d-568vr"
Feb 9 18:37:10.502774 kubelet[2407]: I0209 18:37:10.502751 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dc16ecd-228a-413b-90a8-2837eb959c69-config-volume\") pod \"coredns-5d78c9869d-2lp82\" (UID: \"1dc16ecd-228a-413b-90a8-2837eb959c69\") " pod="kube-system/coredns-5d78c9869d-2lp82"
Feb 9 18:37:10.502827 kubelet[2407]: I0209 18:37:10.502793 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfcmc\" (UniqueName: \"kubernetes.io/projected/1dc16ecd-228a-413b-90a8-2837eb959c69-kube-api-access-nfcmc\") pod \"coredns-5d78c9869d-2lp82\" (UID: \"1dc16ecd-228a-413b-90a8-2837eb959c69\") " pod="kube-system/coredns-5d78c9869d-2lp82"
Feb 9 18:37:10.502827 kubelet[2407]: I0209 18:37:10.502821 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8wf\" (UniqueName: \"kubernetes.io/projected/515a63a0-c793-4819-9efd-ef015afba1ff-kube-api-access-5l8wf\") pod \"coredns-5d78c9869d-568vr\" (UID: \"515a63a0-c793-4819-9efd-ef015afba1ff\") " pod="kube-system/coredns-5d78c9869d-568vr"
Feb 9 18:37:10.728558 env[1356]: time="2024-02-09T18:37:10.728171817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2lp82,Uid:1dc16ecd-228a-413b-90a8-2837eb959c69,Namespace:kube-system,Attempt:0,}"
Feb 9 18:37:10.732070 env[1356]: time="2024-02-09T18:37:10.732031384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-568vr,Uid:515a63a0-c793-4819-9efd-ef015afba1ff,Namespace:kube-system,Attempt:0,}"
Feb 9 18:37:10.783310 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:37:11.158655 kubelet[2407]: I0209 18:37:11.158626 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2zrzr" podStartSLOduration=7.25580513 podCreationTimestamp="2024-02-09 18:36:53 +0000 UTC" firstStartedPulling="2024-02-09 18:36:54.773505592 +0000 UTC m=+13.879604595" lastFinishedPulling="2024-02-09 18:37:05.67629152 +0000 UTC m=+24.782390523" observedRunningTime="2024-02-09 18:37:11.158143308 +0000 UTC m=+30.264242351" watchObservedRunningTime="2024-02-09 18:37:11.158591058 +0000 UTC m=+30.264690061"
Feb 9 18:37:12.424343 systemd-networkd[1501]: cilium_host: Link UP
Feb 9 18:37:12.424605 systemd-networkd[1501]: cilium_net: Link UP
Feb 9 18:37:12.424608 systemd-networkd[1501]: cilium_net: Gained carrier
Feb 9 18:37:12.430806 systemd-networkd[1501]: cilium_host: Gained carrier
Feb 9 18:37:12.431334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 18:37:12.595820 systemd-networkd[1501]: cilium_vxlan: Link UP
Feb 9 18:37:12.595826 systemd-networkd[1501]: cilium_vxlan: Gained carrier
Feb 9 18:37:12.638389 systemd-networkd[1501]: cilium_host: Gained IPv6LL
Feb 9 18:37:12.836305 kernel: NET: Registered PF_ALG protocol family
Feb 9 18:37:13.398410 systemd-networkd[1501]: cilium_net: Gained IPv6LL
Feb 9 18:37:13.478942 systemd-networkd[1501]: lxc_health: Link UP
Feb 9 18:37:13.491996 systemd-networkd[1501]: lxc_health: Gained carrier
Feb 9 18:37:13.492319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:37:13.821192 systemd-networkd[1501]: lxc68326c36e366: Link UP
Feb 9 18:37:13.834334 kernel: eth0: renamed from tmp5967a
Feb 9 18:37:13.840955 systemd-networkd[1501]: lxcb17a58ceae23: Link UP
Feb 9 18:37:13.861832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc68326c36e366: link becomes ready
Feb 9 18:37:13.861934 kernel: eth0: renamed from tmp5a65f
Feb 9 18:37:13.858800 systemd-networkd[1501]: lxc68326c36e366: Gained carrier
Feb 9 18:37:13.883661 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb17a58ceae23: link becomes ready
Feb 9 18:37:13.883245 systemd-networkd[1501]: lxcb17a58ceae23: Gained carrier
Feb 9 18:37:14.294405 systemd-networkd[1501]: cilium_vxlan: Gained IPv6LL
Feb 9 18:37:14.614447 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Feb 9 18:37:15.383838 systemd-networkd[1501]: lxcb17a58ceae23: Gained IPv6LL
Feb 9 18:37:15.639460 systemd-networkd[1501]: lxc68326c36e366: Gained IPv6LL
Feb 9 18:37:17.412346 env[1356]: time="2024-02-09T18:37:17.411565143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:37:17.412346 env[1356]: time="2024-02-09T18:37:17.411605936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:37:17.412346 env[1356]: time="2024-02-09T18:37:17.411616174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:37:17.412346 env[1356]: time="2024-02-09T18:37:17.411719875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76 pid=3580 runtime=io.containerd.runc.v2
Feb 9 18:37:17.427538 env[1356]: time="2024-02-09T18:37:17.427453213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:37:17.427675 env[1356]: time="2024-02-09T18:37:17.427542717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:37:17.427675 env[1356]: time="2024-02-09T18:37:17.427570552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:37:17.427798 env[1356]: time="2024-02-09T18:37:17.427764517Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5967a48d6a36d6b4996d64ff63ef41e3652b2bf8c63f6ad31f1126fae5b38382 pid=3597 runtime=io.containerd.runc.v2
Feb 9 18:37:17.445387 systemd[1]: run-containerd-runc-k8s.io-5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76-runc.f62voy.mount: Deactivated successfully.
Feb 9 18:37:17.450414 systemd[1]: Started cri-containerd-5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76.scope.
Feb 9 18:37:17.466201 systemd[1]: Started cri-containerd-5967a48d6a36d6b4996d64ff63ef41e3652b2bf8c63f6ad31f1126fae5b38382.scope.
Feb 9 18:37:17.493532 env[1356]: time="2024-02-09T18:37:17.493480763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-568vr,Uid:515a63a0-c793-4819-9efd-ef015afba1ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76\""
Feb 9 18:37:17.496317 env[1356]: time="2024-02-09T18:37:17.496265617Z" level=info msg="CreateContainer within sandbox \"5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:37:17.531482 env[1356]: time="2024-02-09T18:37:17.531434780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2lp82,Uid:1dc16ecd-228a-413b-90a8-2837eb959c69,Namespace:kube-system,Attempt:0,} returns sandbox id \"5967a48d6a36d6b4996d64ff63ef41e3652b2bf8c63f6ad31f1126fae5b38382\""
Feb 9 18:37:17.535091 env[1356]: time="2024-02-09T18:37:17.535053721Z" level=info msg="CreateContainer within sandbox \"5967a48d6a36d6b4996d64ff63ef41e3652b2bf8c63f6ad31f1126fae5b38382\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:37:17.540974 env[1356]: time="2024-02-09T18:37:17.540928093Z" level=info msg="CreateContainer within sandbox \"5a65f38937e632fbd836efd8aa4ebbbce7f4ed4a32393212bdfa174eb2ddcc76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7415a796a34d1dc403ed96a412398cf2e7c6e18adf52bd3355e14df60a12fb3e\""
Feb 9 18:37:17.541675 env[1356]: time="2024-02-09T18:37:17.541650841Z" level=info msg="StartContainer for \"7415a796a34d1dc403ed96a412398cf2e7c6e18adf52bd3355e14df60a12fb3e\""
Feb 9 18:37:17.556598 systemd[1]: Started cri-containerd-7415a796a34d1dc403ed96a412398cf2e7c6e18adf52bd3355e14df60a12fb3e.scope.
Feb 9 18:37:17.578164 env[1356]: time="2024-02-09T18:37:17.578124207Z" level=info msg="CreateContainer within sandbox \"5967a48d6a36d6b4996d64ff63ef41e3652b2bf8c63f6ad31f1126fae5b38382\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e78cd11ff73d27a9d893d211eac50fcb7e99798b193a32453b56574a3ae2202\""
Feb 9 18:37:17.579864 env[1356]: time="2024-02-09T18:37:17.578974812Z" level=info msg="StartContainer for \"9e78cd11ff73d27a9d893d211eac50fcb7e99798b193a32453b56574a3ae2202\""
Feb 9 18:37:17.594848 env[1356]: time="2024-02-09T18:37:17.594795215Z" level=info msg="StartContainer for \"7415a796a34d1dc403ed96a412398cf2e7c6e18adf52bd3355e14df60a12fb3e\" returns successfully"
Feb 9 18:37:17.603207 systemd[1]: Started cri-containerd-9e78cd11ff73d27a9d893d211eac50fcb7e99798b193a32453b56574a3ae2202.scope.
Feb 9 18:37:17.648889 env[1356]: time="2024-02-09T18:37:17.648827586Z" level=info msg="StartContainer for \"9e78cd11ff73d27a9d893d211eac50fcb7e99798b193a32453b56574a3ae2202\" returns successfully"
Feb 9 18:37:18.173516 kubelet[2407]: I0209 18:37:18.173470 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-568vr" podStartSLOduration=25.173426539 podCreationTimestamp="2024-02-09 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:37:18.173178183 +0000 UTC m=+37.279277186" watchObservedRunningTime="2024-02-09 18:37:18.173426539 +0000 UTC m=+37.279525542"
Feb 9 18:37:18.187892 kubelet[2407]: I0209 18:37:18.187863 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2lp82" podStartSLOduration=25.187829362 podCreationTimestamp="2024-02-09 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:37:18.186519396 +0000 UTC m=+37.292618399" watchObservedRunningTime="2024-02-09 18:37:18.187829362 +0000 UTC m=+37.293928365"
Feb 9 18:39:05.906215 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.12.6:59676.service.
Feb 9 18:39:06.353445 sshd[3759]: Accepted publickey for core from 10.200.12.6 port 59676 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:06.354765 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:06.359399 systemd[1]: Started session-8.scope.
Feb 9 18:39:06.359723 systemd-logind[1339]: New session 8 of user core.
Feb 9 18:39:06.803504 sshd[3759]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:06.806165 systemd-logind[1339]: Session 8 logged out. Waiting for processes to exit.
Feb 9 18:39:06.806821 systemd[1]: sshd@5-10.200.20.17:22-10.200.12.6:59676.service: Deactivated successfully.
Feb 9 18:39:06.807586 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 18:39:06.808339 systemd-logind[1339]: Removed session 8.
Feb 9 18:39:11.877365 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.12.6:47918.service.
Feb 9 18:39:12.327138 sshd[3773]: Accepted publickey for core from 10.200.12.6 port 47918 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:12.328372 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:12.332789 systemd[1]: Started session-9.scope.
Feb 9 18:39:12.333517 systemd-logind[1339]: New session 9 of user core.
Feb 9 18:39:12.706105 sshd[3773]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:12.708828 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 18:39:12.708853 systemd-logind[1339]: Session 9 logged out. Waiting for processes to exit.
Feb 9 18:39:12.709562 systemd[1]: sshd@6-10.200.20.17:22-10.200.12.6:47918.service: Deactivated successfully.
Feb 9 18:39:12.710617 systemd-logind[1339]: Removed session 9.
Feb 9 18:39:17.780181 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.12.6:37934.service. Feb 9 18:39:18.227689 sshd[3786]: Accepted publickey for core from 10.200.12.6 port 37934 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:39:18.229034 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:18.232760 systemd-logind[1339]: New session 10 of user core. Feb 9 18:39:18.233174 systemd[1]: Started session-10.scope. Feb 9 18:39:18.614257 sshd[3786]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:18.616770 systemd-logind[1339]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:39:18.616923 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:39:18.617813 systemd-logind[1339]: Removed session 10. Feb 9 18:39:18.618050 systemd[1]: sshd@7-10.200.20.17:22-10.200.12.6:37934.service: Deactivated successfully. Feb 9 18:39:23.689244 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.12.6:37950.service. Feb 9 18:39:24.138798 sshd[3798]: Accepted publickey for core from 10.200.12.6 port 37950 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:39:24.140428 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:24.144101 systemd-logind[1339]: New session 11 of user core. Feb 9 18:39:24.144598 systemd[1]: Started session-11.scope. Feb 9 18:39:24.532815 sshd[3798]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:24.535791 systemd-logind[1339]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:39:24.535942 systemd[1]: sshd@8-10.200.20.17:22-10.200.12.6:37950.service: Deactivated successfully. Feb 9 18:39:24.536641 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:39:24.537350 systemd-logind[1339]: Removed session 11. Feb 9 18:39:24.608128 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.12.6:37956.service. 
Feb 9 18:39:25.058877 sshd[3811]: Accepted publickey for core from 10.200.12.6 port 37956 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:25.060483 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:25.064364 systemd-logind[1339]: New session 12 of user core.
Feb 9 18:39:25.064807 systemd[1]: Started session-12.scope.
Feb 9 18:39:26.030524 sshd[3811]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:26.033158 systemd[1]: sshd@9-10.200.20.17:22-10.200.12.6:37956.service: Deactivated successfully.
Feb 9 18:39:26.034897 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 18:39:26.035521 systemd-logind[1339]: Session 12 logged out. Waiting for processes to exit.
Feb 9 18:39:26.036212 systemd-logind[1339]: Removed session 12.
Feb 9 18:39:26.101128 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.12.6:37970.service.
Feb 9 18:39:26.522073 sshd[3823]: Accepted publickey for core from 10.200.12.6 port 37970 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:26.523649 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:26.527890 systemd-logind[1339]: New session 13 of user core.
Feb 9 18:39:26.528324 systemd[1]: Started session-13.scope.
Feb 9 18:39:26.894007 sshd[3823]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:26.896178 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 18:39:26.897036 systemd[1]: sshd@10-10.200.20.17:22-10.200.12.6:37970.service: Deactivated successfully.
Feb 9 18:39:26.897259 systemd-logind[1339]: Session 13 logged out. Waiting for processes to exit.
Feb 9 18:39:26.899338 systemd-logind[1339]: Removed session 13.
Feb 9 18:39:31.970458 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.12.6:42222.service.
Feb 9 18:39:32.425364 sshd[3834]: Accepted publickey for core from 10.200.12.6 port 42222 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:32.427000 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:32.430776 systemd-logind[1339]: New session 14 of user core.
Feb 9 18:39:32.431260 systemd[1]: Started session-14.scope.
Feb 9 18:39:32.826067 sshd[3834]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:32.828712 systemd-logind[1339]: Session 14 logged out. Waiting for processes to exit.
Feb 9 18:39:32.828953 systemd[1]: sshd@11-10.200.20.17:22-10.200.12.6:42222.service: Deactivated successfully.
Feb 9 18:39:32.829699 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 18:39:32.830470 systemd-logind[1339]: Removed session 14.
Feb 9 18:39:37.895679 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.12.6:37862.service.
Feb 9 18:39:38.311474 sshd[3846]: Accepted publickey for core from 10.200.12.6 port 37862 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:38.313001 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:38.317204 systemd[1]: Started session-15.scope.
Feb 9 18:39:38.317530 systemd-logind[1339]: New session 15 of user core.
Feb 9 18:39:38.671915 sshd[3846]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:38.674516 systemd[1]: sshd@12-10.200.20.17:22-10.200.12.6:37862.service: Deactivated successfully.
Feb 9 18:39:38.675265 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 18:39:38.676013 systemd-logind[1339]: Session 15 logged out. Waiting for processes to exit.
Feb 9 18:39:38.676743 systemd-logind[1339]: Removed session 15.
Feb 9 18:39:38.741923 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.12.6:37876.service.
Feb 9 18:39:39.156830 sshd[3858]: Accepted publickey for core from 10.200.12.6 port 37876 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:39.158434 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:39.162658 systemd[1]: Started session-16.scope.
Feb 9 18:39:39.163235 systemd-logind[1339]: New session 16 of user core.
Feb 9 18:39:39.546343 sshd[3858]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:39.549413 systemd[1]: sshd@13-10.200.20.17:22-10.200.12.6:37876.service: Deactivated successfully.
Feb 9 18:39:39.550158 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 18:39:39.550769 systemd-logind[1339]: Session 16 logged out. Waiting for processes to exit.
Feb 9 18:39:39.551687 systemd-logind[1339]: Removed session 16.
Feb 9 18:39:39.615736 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.12.6:37882.service.
Feb 9 18:39:40.031538 sshd[3868]: Accepted publickey for core from 10.200.12.6 port 37882 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:40.033065 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:40.037162 systemd[1]: Started session-17.scope.
Feb 9 18:39:40.038438 systemd-logind[1339]: New session 17 of user core.
Feb 9 18:39:41.065773 sshd[3868]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:41.069220 systemd[1]: sshd@14-10.200.20.17:22-10.200.12.6:37882.service: Deactivated successfully.
Feb 9 18:39:41.070009 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 18:39:41.070906 systemd-logind[1339]: Session 17 logged out. Waiting for processes to exit.
Feb 9 18:39:41.071848 systemd-logind[1339]: Removed session 17.
Feb 9 18:39:41.141190 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.12.6:37884.service.
Feb 9 18:39:41.591389 sshd[3887]: Accepted publickey for core from 10.200.12.6 port 37884 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:41.593027 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:41.597341 systemd[1]: Started session-18.scope.
Feb 9 18:39:41.597911 systemd-logind[1339]: New session 18 of user core.
Feb 9 18:39:42.167778 sshd[3887]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:42.170225 systemd[1]: sshd@15-10.200.20.17:22-10.200.12.6:37884.service: Deactivated successfully.
Feb 9 18:39:42.171087 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 18:39:42.172177 systemd-logind[1339]: Session 18 logged out. Waiting for processes to exit.
Feb 9 18:39:42.173212 systemd-logind[1339]: Removed session 18.
Feb 9 18:39:42.236574 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.12.6:37888.service.
Feb 9 18:39:42.651383 sshd[3897]: Accepted publickey for core from 10.200.12.6 port 37888 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:42.652985 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:42.656866 systemd-logind[1339]: New session 19 of user core.
Feb 9 18:39:42.657358 systemd[1]: Started session-19.scope.
Feb 9 18:39:43.017118 sshd[3897]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:43.019843 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:39:43.020507 systemd-logind[1339]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:39:43.020636 systemd[1]: sshd@16-10.200.20.17:22-10.200.12.6:37888.service: Deactivated successfully.
Feb 9 18:39:43.021698 systemd-logind[1339]: Removed session 19.
Feb 9 18:39:48.087469 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.12.6:52974.service.
Feb 9 18:39:48.507408 sshd[3911]: Accepted publickey for core from 10.200.12.6 port 52974 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:48.509000 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:48.513217 systemd-logind[1339]: New session 20 of user core.
Feb 9 18:39:48.513888 systemd[1]: Started session-20.scope.
Feb 9 18:39:48.867976 sshd[3911]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:48.870352 systemd-logind[1339]: Session 20 logged out. Waiting for processes to exit.
Feb 9 18:39:48.870604 systemd[1]: sshd@17-10.200.20.17:22-10.200.12.6:52974.service: Deactivated successfully.
Feb 9 18:39:48.871337 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 18:39:48.872106 systemd-logind[1339]: Removed session 20.
Feb 9 18:39:53.944029 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.12.6:52986.service.
Feb 9 18:39:54.394122 sshd[3923]: Accepted publickey for core from 10.200.12.6 port 52986 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:39:54.395813 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:39:54.400144 systemd[1]: Started session-21.scope.
Feb 9 18:39:54.400335 systemd-logind[1339]: New session 21 of user core.
Feb 9 18:39:54.778869 sshd[3923]: pam_unix(sshd:session): session closed for user core
Feb 9 18:39:54.781590 systemd-logind[1339]: Session 21 logged out. Waiting for processes to exit.
Feb 9 18:39:54.782012 systemd[1]: sshd@18-10.200.20.17:22-10.200.12.6:52986.service: Deactivated successfully.
Feb 9 18:39:54.782729 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 18:39:54.784045 systemd-logind[1339]: Removed session 21.
Feb 9 18:39:59.854187 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.12.6:39262.service.
Feb 9 18:40:00.300961 sshd[3937]: Accepted publickey for core from 10.200.12.6 port 39262 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:40:00.302572 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:40:00.306767 systemd[1]: Started session-22.scope.
Feb 9 18:40:00.307364 systemd-logind[1339]: New session 22 of user core.
Feb 9 18:40:00.677850 sshd[3937]: pam_unix(sshd:session): session closed for user core
Feb 9 18:40:00.680265 systemd-logind[1339]: Session 22 logged out. Waiting for processes to exit.
Feb 9 18:40:00.680458 systemd[1]: sshd@19-10.200.20.17:22-10.200.12.6:39262.service: Deactivated successfully.
Feb 9 18:40:00.681169 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 18:40:00.681917 systemd-logind[1339]: Removed session 22.
Feb 9 18:40:00.754307 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.12.6:39274.service.
Feb 9 18:40:01.210783 sshd[3949]: Accepted publickey for core from 10.200.12.6 port 39274 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:40:01.212581 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:40:01.216874 systemd[1]: Started session-23.scope.
Feb 9 18:40:01.218229 systemd-logind[1339]: New session 23 of user core.
Feb 9 18:40:03.554202 systemd[1]: run-containerd-runc-k8s.io-f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9-runc.JqSZuQ.mount: Deactivated successfully.
Feb 9 18:40:03.559754 env[1356]: time="2024-02-09T18:40:03.559705676Z" level=info msg="StopContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" with timeout 30 (s)"
Feb 9 18:40:03.560367 env[1356]: time="2024-02-09T18:40:03.560338238Z" level=info msg="Stop container \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" with signal terminated"
Feb 9 18:40:03.576114 env[1356]: time="2024-02-09T18:40:03.576048375Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:40:03.582479 systemd[1]: cri-containerd-7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae.scope: Deactivated successfully.
Feb 9 18:40:03.584170 env[1356]: time="2024-02-09T18:40:03.584138220Z" level=info msg="StopContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" with timeout 1 (s)"
Feb 9 18:40:03.584687 env[1356]: time="2024-02-09T18:40:03.584665362Z" level=info msg="Stop container \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" with signal terminated"
Feb 9 18:40:03.591779 systemd-networkd[1501]: lxc_health: Link DOWN
Feb 9 18:40:03.591784 systemd-networkd[1501]: lxc_health: Lost carrier
Feb 9 18:40:03.608617 systemd[1]: cri-containerd-f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9.scope: Deactivated successfully.
Feb 9 18:40:03.608893 systemd[1]: cri-containerd-f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9.scope: Consumed 6.261s CPU time.
Feb 9 18:40:03.614154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae-rootfs.mount: Deactivated successfully.
Feb 9 18:40:03.630643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9-rootfs.mount: Deactivated successfully.
Feb 9 18:40:03.678458 env[1356]: time="2024-02-09T18:40:03.678410644Z" level=info msg="shim disconnected" id=7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae
Feb 9 18:40:03.678827 env[1356]: time="2024-02-09T18:40:03.678798172Z" level=warning msg="cleaning up after shim disconnected" id=7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae namespace=k8s.io
Feb 9 18:40:03.678827 env[1356]: time="2024-02-09T18:40:03.678821287Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:03.679105 env[1356]: time="2024-02-09T18:40:03.679061362Z" level=info msg="shim disconnected" id=f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9
Feb 9 18:40:03.679158 env[1356]: time="2024-02-09T18:40:03.679104954Z" level=warning msg="cleaning up after shim disconnected" id=f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9 namespace=k8s.io
Feb 9 18:40:03.679158 env[1356]: time="2024-02-09T18:40:03.679114512Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:03.688827 env[1356]: time="2024-02-09T18:40:03.688778143Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4017 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:03.691653 env[1356]: time="2024-02-09T18:40:03.691616491Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:03.693990 env[1356]: time="2024-02-09T18:40:03.693953013Z" level=info msg="StopContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" returns successfully"
Feb 9 18:40:03.694620 env[1356]: time="2024-02-09T18:40:03.694589774Z" level=info msg="StopPodSandbox for \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\""
Feb 9 18:40:03.694684 env[1356]: time="2024-02-09T18:40:03.694648803Z" level=info msg="Container to stop \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.701643 env[1356]: time="2024-02-09T18:40:03.701510878Z" level=info msg="StopContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" returns successfully"
Feb 9 18:40:03.703667 env[1356]: time="2024-02-09T18:40:03.703626682Z" level=info msg="StopPodSandbox for \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\""
Feb 9 18:40:03.704180 env[1356]: time="2024-02-09T18:40:03.704153463Z" level=info msg="Container to stop \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.704311 env[1356]: time="2024-02-09T18:40:03.704265602Z" level=info msg="Container to stop \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.704399 env[1356]: time="2024-02-09T18:40:03.704366503Z" level=info msg="Container to stop \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.704481 env[1356]: time="2024-02-09T18:40:03.704448008Z" level=info msg="Container to stop \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.704562 env[1356]: time="2024-02-09T18:40:03.704530952Z" level=info msg="Container to stop \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:03.709122 systemd[1]: cri-containerd-3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520.scope: Deactivated successfully.
Feb 9 18:40:03.711759 systemd[1]: cri-containerd-0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414.scope: Deactivated successfully.
Feb 9 18:40:03.745618 env[1356]: time="2024-02-09T18:40:03.745565787Z" level=info msg="shim disconnected" id=0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414
Feb 9 18:40:03.745618 env[1356]: time="2024-02-09T18:40:03.745609619Z" level=warning msg="cleaning up after shim disconnected" id=0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414 namespace=k8s.io
Feb 9 18:40:03.745618 env[1356]: time="2024-02-09T18:40:03.745618937Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:03.745981 env[1356]: time="2024-02-09T18:40:03.745941117Z" level=info msg="shim disconnected" id=3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520
Feb 9 18:40:03.745981 env[1356]: time="2024-02-09T18:40:03.745977390Z" level=warning msg="cleaning up after shim disconnected" id=3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520 namespace=k8s.io
Feb 9 18:40:03.746057 env[1356]: time="2024-02-09T18:40:03.745986708Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:03.754710 env[1356]: time="2024-02-09T18:40:03.754647446Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4077 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:03.754985 env[1356]: time="2024-02-09T18:40:03.754946030Z" level=info msg="TearDown network for sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" successfully"
Feb 9 18:40:03.754985 env[1356]: time="2024-02-09T18:40:03.754972905Z" level=info msg="StopPodSandbox for \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" returns successfully"
Feb 9 18:40:03.762867 env[1356]: time="2024-02-09T18:40:03.762830314Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4078 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:03.766565 env[1356]: time="2024-02-09T18:40:03.766534020Z" level=info msg="TearDown network for sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" successfully"
Feb 9 18:40:03.766701 env[1356]: time="2024-02-09T18:40:03.766682752Z" level=info msg="StopPodSandbox for \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" returns successfully"
Feb 9 18:40:03.797319 kubelet[2407]: I0209 18:40:03.797270 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-kernel\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797319 kubelet[2407]: I0209 18:40:03.797325 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-bpf-maps\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797343 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-lib-modules\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797360 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-cgroup\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797377 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cni-path\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797399 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvnf\" (UniqueName: \"kubernetes.io/projected/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-kube-api-access-vkvnf\") pod \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\" (UID: \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797422 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d51b3d-6c49-4bca-8284-c4a993836db0-clustermesh-secrets\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797663 kubelet[2407]: I0209 18:40:03.797444 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-cilium-config-path\") pod \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\" (UID: \"ae0e75a6-2aca-4cb3-8508-2a30c9b250d7\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797460 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-net\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797480 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpcfc\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797515 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-xtables-lock\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797535 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-hubble-tls\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797552 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-hostproc\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797812 kubelet[2407]: I0209 18:40:03.797569 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-run\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797942 kubelet[2407]: I0209 18:40:03.797587 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-etc-cni-netd\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797942 kubelet[2407]: I0209 18:40:03.797608 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-config-path\") pod \"55d51b3d-6c49-4bca-8284-c4a993836db0\" (UID: \"55d51b3d-6c49-4bca-8284-c4a993836db0\") "
Feb 9 18:40:03.797942 kubelet[2407]: W0209 18:40:03.797792 2407 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/55d51b3d-6c49-4bca-8284-c4a993836db0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:40:03.799480 kubelet[2407]: I0209 18:40:03.799431 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:40:03.799547 kubelet[2407]: I0209 18:40:03.799490 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.799606 kubelet[2407]: W0209 18:40:03.799457 2407 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:40:03.800048 kubelet[2407]: I0209 18:40:03.800021 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.801652 kubelet[2407]: I0209 18:40:03.801612 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae0e75a6-2aca-4cb3-8508-2a30c9b250d7" (UID: "ae0e75a6-2aca-4cb3-8508-2a30c9b250d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:40:03.801786 kubelet[2407]: I0209 18:40:03.801769 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.801905 kubelet[2407]: I0209 18:40:03.801892 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.801994 kubelet[2407]: I0209 18:40:03.801981 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802076 kubelet[2407]: I0209 18:40:03.802065 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802153 kubelet[2407]: I0209 18:40:03.802142 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cni-path" (OuterVolumeSpecName: "cni-path") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802463 kubelet[2407]: I0209 18:40:03.802432 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-hostproc" (OuterVolumeSpecName: "hostproc") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802531 kubelet[2407]: I0209 18:40:03.802470 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802531 kubelet[2407]: I0209 18:40:03.802489 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:40:03.802584 kubelet[2407]: I0209 18:40:03.802557 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc" (OuterVolumeSpecName: "kube-api-access-wpcfc") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "kube-api-access-wpcfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:03.806260 kubelet[2407]: I0209 18:40:03.804576 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:03.807109 kubelet[2407]: I0209 18:40:03.807084 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d51b3d-6c49-4bca-8284-c4a993836db0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "55d51b3d-6c49-4bca-8284-c4a993836db0" (UID: "55d51b3d-6c49-4bca-8284-c4a993836db0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:40:03.807227 kubelet[2407]: I0209 18:40:03.807090 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-kube-api-access-vkvnf" (OuterVolumeSpecName: "kube-api-access-vkvnf") pod "ae0e75a6-2aca-4cb3-8508-2a30c9b250d7" (UID: "ae0e75a6-2aca-4cb3-8508-2a30c9b250d7"). InnerVolumeSpecName "kube-api-access-vkvnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:40:03.897912 kubelet[2407]: I0209 18:40:03.897882 2407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-bpf-maps\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898087 kubelet[2407]: I0209 18:40:03.898077 2407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-lib-modules\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898171 kubelet[2407]: I0209 18:40:03.898162 2407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d51b3d-6c49-4bca-8284-c4a993836db0-clustermesh-secrets\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898234 kubelet[2407]: I0209 18:40:03.898226 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-cgroup\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898330 kubelet[2407]: I0209 18:40:03.898321 2407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cni-path\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898396 kubelet[2407]: I0209 18:40:03.898387 2407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vkvnf\" (UniqueName: \"kubernetes.io/projected/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-kube-api-access-vkvnf\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.898501 kubelet[2407]: I0209 18:40:03.898468 2407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-net\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899385 2407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wpcfc\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-kube-api-access-wpcfc\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899419 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7-cilium-config-path\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899432 2407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-xtables-lock\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899442 2407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-etc-cni-netd\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899452 2407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d51b3d-6c49-4bca-8284-c4a993836db0-hubble-tls\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\""
Feb 9 18:40:03.899611 kubelet[2407]: I0209
18:40:03.899461 2407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-hostproc\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899470 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-run\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:03.899611 kubelet[2407]: I0209 18:40:03.899482 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d51b3d-6c49-4bca-8284-c4a993836db0-cilium-config-path\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:03.899799 kubelet[2407]: I0209 18:40:03.899496 2407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55d51b3d-6c49-4bca-8284-c4a993836db0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:04.472041 kubelet[2407]: I0209 18:40:04.472016 2407 scope.go:115] "RemoveContainer" containerID="7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae" Feb 9 18:40:04.478524 env[1356]: time="2024-02-09T18:40:04.475934858Z" level=info msg="RemoveContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\"" Feb 9 18:40:04.479457 systemd[1]: Removed slice kubepods-besteffort-podae0e75a6_2aca_4cb3_8508_2a30c9b250d7.slice. Feb 9 18:40:04.482321 systemd[1]: Removed slice kubepods-burstable-pod55d51b3d_6c49_4bca_8284_c4a993836db0.slice. Feb 9 18:40:04.482401 systemd[1]: kubepods-burstable-pod55d51b3d_6c49_4bca_8284_c4a993836db0.slice: Consumed 6.343s CPU time. 
Feb 9 18:40:04.487180 env[1356]: time="2024-02-09T18:40:04.487109376Z" level=info msg="RemoveContainer for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" returns successfully" Feb 9 18:40:04.487629 kubelet[2407]: I0209 18:40:04.487595 2407 scope.go:115] "RemoveContainer" containerID="7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae" Feb 9 18:40:04.488134 env[1356]: time="2024-02-09T18:40:04.487920304Z" level=error msg="ContainerStatus for \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\": not found" Feb 9 18:40:04.490080 kubelet[2407]: E0209 18:40:04.490037 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\": not found" containerID="7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae" Feb 9 18:40:04.490339 kubelet[2407]: I0209 18:40:04.490323 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae} err="failed to get container status \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f710b0c0a7683b34c610048ce2d1803ff9a6a6ac74839e9a26d975af58741ae\": not found" Feb 9 18:40:04.490434 kubelet[2407]: I0209 18:40:04.490424 2407 scope.go:115] "RemoveContainer" containerID="f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9" Feb 9 18:40:04.493317 env[1356]: time="2024-02-09T18:40:04.493078423Z" level=info msg="RemoveContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\"" Feb 9 18:40:04.501519 env[1356]: 
time="2024-02-09T18:40:04.501382156Z" level=info msg="RemoveContainer for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" returns successfully" Feb 9 18:40:04.501749 kubelet[2407]: I0209 18:40:04.501732 2407 scope.go:115] "RemoveContainer" containerID="dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea" Feb 9 18:40:04.503522 env[1356]: time="2024-02-09T18:40:04.503447651Z" level=info msg="RemoveContainer for \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\"" Feb 9 18:40:04.511847 env[1356]: time="2024-02-09T18:40:04.511809334Z" level=info msg="RemoveContainer for \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\" returns successfully" Feb 9 18:40:04.512051 kubelet[2407]: I0209 18:40:04.512034 2407 scope.go:115] "RemoveContainer" containerID="998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224" Feb 9 18:40:04.513207 env[1356]: time="2024-02-09T18:40:04.513175679Z" level=info msg="RemoveContainer for \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\"" Feb 9 18:40:04.520766 env[1356]: time="2024-02-09T18:40:04.520733831Z" level=info msg="RemoveContainer for \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\" returns successfully" Feb 9 18:40:04.520966 kubelet[2407]: I0209 18:40:04.520942 2407 scope.go:115] "RemoveContainer" containerID="c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e" Feb 9 18:40:04.522155 env[1356]: time="2024-02-09T18:40:04.521940526Z" level=info msg="RemoveContainer for \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\"" Feb 9 18:40:04.529797 env[1356]: time="2024-02-09T18:40:04.529729075Z" level=info msg="RemoveContainer for \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\" returns successfully" Feb 9 18:40:04.530096 kubelet[2407]: I0209 18:40:04.530081 2407 scope.go:115] "RemoveContainer" containerID="e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966" Feb 9 
18:40:04.531354 env[1356]: time="2024-02-09T18:40:04.531322978Z" level=info msg="RemoveContainer for \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\"" Feb 9 18:40:04.539655 env[1356]: time="2024-02-09T18:40:04.539626151Z" level=info msg="RemoveContainer for \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\" returns successfully" Feb 9 18:40:04.539927 kubelet[2407]: I0209 18:40:04.539912 2407 scope.go:115] "RemoveContainer" containerID="f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9" Feb 9 18:40:04.540259 env[1356]: time="2024-02-09T18:40:04.540202243Z" level=error msg="ContainerStatus for \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\": not found" Feb 9 18:40:04.540433 kubelet[2407]: E0209 18:40:04.540412 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\": not found" containerID="f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9" Feb 9 18:40:04.540478 kubelet[2407]: I0209 18:40:04.540453 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9} err="failed to get container status \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9966d28ae8e9aec118c41c8be5ff0dcccc4d94dfacd884344ce2062c83c09a9\": not found" Feb 9 18:40:04.540478 kubelet[2407]: I0209 18:40:04.540464 2407 scope.go:115] "RemoveContainer" containerID="dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea" Feb 9 18:40:04.540672 env[1356]: 
time="2024-02-09T18:40:04.540619326Z" level=error msg="ContainerStatus for \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\": not found" Feb 9 18:40:04.540836 kubelet[2407]: E0209 18:40:04.540811 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\": not found" containerID="dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea" Feb 9 18:40:04.540936 kubelet[2407]: I0209 18:40:04.540924 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea} err="failed to get container status \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc5b422ce9bada4ddbdf4ae8d24b966ca915ad66bf287aa7a7dca27b7b74c1ea\": not found" Feb 9 18:40:04.541014 kubelet[2407]: I0209 18:40:04.541005 2407 scope.go:115] "RemoveContainer" containerID="998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224" Feb 9 18:40:04.541266 env[1356]: time="2024-02-09T18:40:04.541213255Z" level=error msg="ContainerStatus for \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\": not found" Feb 9 18:40:04.541433 kubelet[2407]: E0209 18:40:04.541412 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\": 
not found" containerID="998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224" Feb 9 18:40:04.541480 kubelet[2407]: I0209 18:40:04.541443 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224} err="failed to get container status \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\": rpc error: code = NotFound desc = an error occurred when try to find container \"998612a6dbc8a32f5e9b886c22664f6ca34d74bac10db5dad89f66f719172224\": not found" Feb 9 18:40:04.541480 kubelet[2407]: I0209 18:40:04.541454 2407 scope.go:115] "RemoveContainer" containerID="c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e" Feb 9 18:40:04.541721 env[1356]: time="2024-02-09T18:40:04.541674449Z" level=error msg="ContainerStatus for \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\": not found" Feb 9 18:40:04.541863 kubelet[2407]: E0209 18:40:04.541844 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\": not found" containerID="c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e" Feb 9 18:40:04.541902 kubelet[2407]: I0209 18:40:04.541874 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e} err="failed to get container status \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5568b936a09db6e61c9d7880ed8d70667f6d0b69f09e95b335a37868f25115e\": not found" Feb 9 18:40:04.541902 
kubelet[2407]: I0209 18:40:04.541884 2407 scope.go:115] "RemoveContainer" containerID="e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966" Feb 9 18:40:04.542057 env[1356]: time="2024-02-09T18:40:04.542011466Z" level=error msg="ContainerStatus for \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\": not found" Feb 9 18:40:04.542208 kubelet[2407]: E0209 18:40:04.542184 2407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\": not found" containerID="e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966" Feb 9 18:40:04.542327 kubelet[2407]: I0209 18:40:04.542317 2407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966} err="failed to get container status \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7f3388bc27c0258697a7edfce0b13357321a385ca1d2c5db121943adc975966\": not found" Feb 9 18:40:04.551695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414-rootfs.mount: Deactivated successfully. Feb 9 18:40:04.551801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414-shm.mount: Deactivated successfully. Feb 9 18:40:04.551862 systemd[1]: var-lib-kubelet-pods-55d51b3d\x2d6c49\x2d4bca\x2d8284\x2dc4a993836db0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpcfc.mount: Deactivated successfully. 
Feb 9 18:40:04.551914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520-rootfs.mount: Deactivated successfully. Feb 9 18:40:04.551963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520-shm.mount: Deactivated successfully. Feb 9 18:40:04.552018 systemd[1]: var-lib-kubelet-pods-ae0e75a6\x2d2aca\x2d4cb3\x2d8508\x2d2a30c9b250d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvkvnf.mount: Deactivated successfully. Feb 9 18:40:04.552069 systemd[1]: var-lib-kubelet-pods-55d51b3d\x2d6c49\x2d4bca\x2d8284\x2dc4a993836db0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:04.552115 systemd[1]: var-lib-kubelet-pods-55d51b3d\x2d6c49\x2d4bca\x2d8284\x2dc4a993836db0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:05.065560 kubelet[2407]: I0209 18:40:05.065526 2407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=55d51b3d-6c49-4bca-8284-c4a993836db0 path="/var/lib/kubelet/pods/55d51b3d-6c49-4bca-8284-c4a993836db0/volumes" Feb 9 18:40:05.066092 kubelet[2407]: I0209 18:40:05.066073 2407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ae0e75a6-2aca-4cb3-8508-2a30c9b250d7 path="/var/lib/kubelet/pods/ae0e75a6-2aca-4cb3-8508-2a30c9b250d7/volumes" Feb 9 18:40:05.584841 sshd[3949]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:05.587652 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:40:05.587653 systemd-logind[1339]: Session 23 logged out. Waiting for processes to exit. Feb 9 18:40:05.587853 systemd[1]: session-23.scope: Consumed 1.427s CPU time. Feb 9 18:40:05.588550 systemd[1]: sshd@20-10.200.20.17:22-10.200.12.6:39274.service: Deactivated successfully. Feb 9 18:40:05.589596 systemd-logind[1339]: Removed session 23. 
Feb 9 18:40:05.660150 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.12.6:39284.service. Feb 9 18:40:06.109932 sshd[4112]: Accepted publickey for core from 10.200.12.6 port 39284 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:40:06.111588 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:06.116621 systemd[1]: Started session-24.scope. Feb 9 18:40:06.117877 systemd-logind[1339]: New session 24 of user core. Feb 9 18:40:06.166507 kubelet[2407]: E0209 18:40:06.166475 2407 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:07.266982 kubelet[2407]: I0209 18:40:07.266941 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:40:07.266982 kubelet[2407]: E0209 18:40:07.266997 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e75a6-2aca-4cb3-8508-2a30c9b250d7" containerName="cilium-operator" Feb 9 18:40:07.267390 kubelet[2407]: E0209 18:40:07.267006 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="apply-sysctl-overwrites" Feb 9 18:40:07.267390 kubelet[2407]: E0209 18:40:07.267014 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="clean-cilium-state" Feb 9 18:40:07.267390 kubelet[2407]: E0209 18:40:07.267021 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="mount-cgroup" Feb 9 18:40:07.267390 kubelet[2407]: E0209 18:40:07.267027 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="mount-bpf-fs" Feb 9 18:40:07.267390 kubelet[2407]: E0209 18:40:07.267033 2407 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="cilium-agent" Feb 9 18:40:07.267390 kubelet[2407]: I0209 18:40:07.267054 2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="ae0e75a6-2aca-4cb3-8508-2a30c9b250d7" containerName="cilium-operator" Feb 9 18:40:07.267390 kubelet[2407]: I0209 18:40:07.267061 2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="55d51b3d-6c49-4bca-8284-c4a993836db0" containerName="cilium-agent" Feb 9 18:40:07.272142 systemd[1]: Created slice kubepods-burstable-pod296ba679_7187_4567_bce6_810c55e695c1.slice. Feb 9 18:40:07.299746 sshd[4112]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:07.302548 systemd[1]: sshd@21-10.200.20.17:22-10.200.12.6:39284.service: Deactivated successfully. Feb 9 18:40:07.303316 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:40:07.304491 systemd-logind[1339]: Session 24 logged out. Waiting for processes to exit. Feb 9 18:40:07.305799 systemd-logind[1339]: Removed session 24. Feb 9 18:40:07.374387 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.12.6:53610.service. 
Feb 9 18:40:07.417969 kubelet[2407]: I0209 18:40:07.417943 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-net\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418220 kubelet[2407]: I0209 18:40:07.418196 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-bpf-maps\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418356 kubelet[2407]: I0209 18:40:07.418346 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-hubble-tls\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418473 kubelet[2407]: I0209 18:40:07.418464 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-xtables-lock\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418654 kubelet[2407]: I0209 18:40:07.418643 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-lib-modules\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418784 kubelet[2407]: I0209 18:40:07.418773 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lt97b\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-kube-api-access-lt97b\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.418902 kubelet[2407]: I0209 18:40:07.418892 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-etc-cni-netd\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419015 kubelet[2407]: I0209 18:40:07.419006 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-cilium-ipsec-secrets\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419119 kubelet[2407]: I0209 18:40:07.419110 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-run\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419235 kubelet[2407]: I0209 18:40:07.419224 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cni-path\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419352 kubelet[2407]: I0209 18:40:07.419343 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-clustermesh-secrets\") pod 
\"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419469 kubelet[2407]: I0209 18:40:07.419460 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-hostproc\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419579 kubelet[2407]: I0209 18:40:07.419570 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-cgroup\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419689 kubelet[2407]: I0209 18:40:07.419679 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/296ba679-7187-4567-bce6-810c55e695c1-cilium-config-path\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.419803 kubelet[2407]: I0209 18:40:07.419791 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-kernel\") pod \"cilium-lxb5h\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " pod="kube-system/cilium-lxb5h" Feb 9 18:40:07.577332 env[1356]: time="2024-02-09T18:40:07.576094656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxb5h,Uid:296ba679-7187-4567-bce6-810c55e695c1,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:07.636525 env[1356]: time="2024-02-09T18:40:07.636458179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:07.636707 env[1356]: time="2024-02-09T18:40:07.636685098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:07.636785 env[1356]: time="2024-02-09T18:40:07.636766363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:07.637027 env[1356]: time="2024-02-09T18:40:07.636992321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49 pid=4139 runtime=io.containerd.runc.v2 Feb 9 18:40:07.648061 systemd[1]: Started cri-containerd-061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49.scope. Feb 9 18:40:07.671700 env[1356]: time="2024-02-09T18:40:07.671663439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxb5h,Uid:296ba679-7187-4567-bce6-810c55e695c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\"" Feb 9 18:40:07.676397 env[1356]: time="2024-02-09T18:40:07.676260196Z" level=info msg="CreateContainer within sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:07.707890 env[1356]: time="2024-02-09T18:40:07.707834042Z" level=info msg="CreateContainer within sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\"" Feb 9 18:40:07.708455 env[1356]: time="2024-02-09T18:40:07.708370343Z" level=info msg="StartContainer for \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\"" Feb 9 18:40:07.724854 systemd[1]: Started 
cri-containerd-5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3.scope. Feb 9 18:40:07.735757 systemd[1]: cri-containerd-5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3.scope: Deactivated successfully. Feb 9 18:40:07.818315 env[1356]: time="2024-02-09T18:40:07.818229664Z" level=info msg="shim disconnected" id=5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3 Feb 9 18:40:07.818315 env[1356]: time="2024-02-09T18:40:07.818307209Z" level=warning msg="cleaning up after shim disconnected" id=5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3 namespace=k8s.io Feb 9 18:40:07.818315 env[1356]: time="2024-02-09T18:40:07.818317167Z" level=info msg="cleaning up dead shim" Feb 9 18:40:07.821372 sshd[4125]: Accepted publickey for core from 10.200.12.6 port 53610 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:40:07.822838 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:07.829864 systemd[1]: Started session-25.scope. Feb 9 18:40:07.830605 systemd-logind[1339]: New session 25 of user core. 
Feb 9 18:40:07.839037 env[1356]: time="2024-02-09T18:40:07.838987174Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4199 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:40:07.839391 env[1356]: time="2024-02-09T18:40:07.839251526Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Feb 9 18:40:07.840367 env[1356]: time="2024-02-09T18:40:07.840328008Z" level=error msg="Failed to pipe stdout of container \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\"" error="reading from a closed fifo" Feb 9 18:40:07.840471 env[1356]: time="2024-02-09T18:40:07.840344445Z" level=error msg="Failed to pipe stderr of container \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\"" error="reading from a closed fifo" Feb 9 18:40:07.844629 env[1356]: time="2024-02-09T18:40:07.844569110Z" level=error msg="StartContainer for \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:40:07.844864 kubelet[2407]: E0209 18:40:07.844832 2407 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3" Feb 9 18:40:07.844956 kubelet[2407]: E0209 18:40:07.844947 2407 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:40:07.844956 kubelet[2407]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:40:07.844956 kubelet[2407]: rm /hostbin/cilium-mount Feb 9 18:40:07.845028 kubelet[2407]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lt97b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in 
pod cilium-lxb5h_kube-system(296ba679-7187-4567-bce6-810c55e695c1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:40:07.845028 kubelet[2407]: E0209 18:40:07.844983 2407 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lxb5h" podUID=296ba679-7187-4567-bce6-810c55e695c1 Feb 9 18:40:08.216498 sshd[4125]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:08.218905 systemd-logind[1339]: Session 25 logged out. Waiting for processes to exit. Feb 9 18:40:08.219153 systemd[1]: sshd@22-10.200.20.17:22-10.200.12.6:53610.service: Deactivated successfully. Feb 9 18:40:08.219889 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 18:40:08.220643 systemd-logind[1339]: Removed session 25. Feb 9 18:40:08.287059 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.12.6:53614.service. Feb 9 18:40:08.487806 env[1356]: time="2024-02-09T18:40:08.487446061Z" level=info msg="StopPodSandbox for \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\"" Feb 9 18:40:08.487806 env[1356]: time="2024-02-09T18:40:08.487541844Z" level=info msg="Container to stop \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:08.503992 systemd[1]: cri-containerd-061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49.scope: Deactivated successfully. 
Feb 9 18:40:08.529957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49-shm.mount: Deactivated successfully. Feb 9 18:40:08.538816 env[1356]: time="2024-02-09T18:40:08.538776889Z" level=info msg="shim disconnected" id=061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49 Feb 9 18:40:08.539085 env[1356]: time="2024-02-09T18:40:08.539055718Z" level=warning msg="cleaning up after shim disconnected" id=061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49 namespace=k8s.io Feb 9 18:40:08.539543 env[1356]: time="2024-02-09T18:40:08.539522552Z" level=info msg="cleaning up dead shim" Feb 9 18:40:08.547544 env[1356]: time="2024-02-09T18:40:08.547512773Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4241 runtime=io.containerd.runc.v2\n" Feb 9 18:40:08.547917 env[1356]: time="2024-02-09T18:40:08.547892904Z" level=info msg="TearDown network for sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" successfully" Feb 9 18:40:08.548007 env[1356]: time="2024-02-09T18:40:08.547990846Z" level=info msg="StopPodSandbox for \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" returns successfully" Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634357 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-lib-modules\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634398 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/296ba679-7187-4567-bce6-810c55e695c1-cilium-config-path\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: 
\"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634420 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-bpf-maps\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634439 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-net\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634467 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-hubble-tls\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634484 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-run\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634503 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cni-path\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634523 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-xtables-lock\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634542 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt97b\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-kube-api-access-lt97b\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634561 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-cgroup\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634578 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-hostproc\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634596 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-kernel\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634614 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-etc-cni-netd\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634634 2407 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-cilium-ipsec-secrets\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.634968 kubelet[2407]: I0209 18:40:08.634655 2407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-clustermesh-secrets\") pod \"296ba679-7187-4567-bce6-810c55e695c1\" (UID: \"296ba679-7187-4567-bce6-810c55e695c1\") " Feb 9 18:40:08.636310 kubelet[2407]: I0209 18:40:08.635684 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.636310 kubelet[2407]: I0209 18:40:08.635724 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.636310 kubelet[2407]: I0209 18:40:08.635804 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.636310 kubelet[2407]: W0209 18:40:08.635845 2407 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/296ba679-7187-4567-bce6-810c55e695c1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:08.636466 kubelet[2407]: I0209 18:40:08.636392 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.636466 kubelet[2407]: I0209 18:40:08.636442 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.636683 kubelet[2407]: I0209 18:40:08.636657 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.638028 kubelet[2407]: I0209 18:40:08.637983 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/296ba679-7187-4567-bce6-810c55e695c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:08.638115 kubelet[2407]: I0209 18:40:08.638037 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.638115 kubelet[2407]: I0209 18:40:08.638055 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.638115 kubelet[2407]: I0209 18:40:08.638070 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.638115 kubelet[2407]: I0209 18:40:08.638088 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.640000 systemd[1]: var-lib-kubelet-pods-296ba679\x2d7187\x2d4567\x2dbce6\x2d810c55e695c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlt97b.mount: Deactivated successfully. Feb 9 18:40:08.641112 kubelet[2407]: I0209 18:40:08.641071 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-kube-api-access-lt97b" (OuterVolumeSpecName: "kube-api-access-lt97b") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "kube-api-access-lt97b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:08.643106 systemd[1]: var-lib-kubelet-pods-296ba679\x2d7187\x2d4567\x2dbce6\x2d810c55e695c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:08.646487 systemd[1]: var-lib-kubelet-pods-296ba679\x2d7187\x2d4567\x2dbce6\x2d810c55e695c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:08.647128 kubelet[2407]: I0209 18:40:08.647101 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:08.647719 kubelet[2407]: I0209 18:40:08.647648 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:08.648993 systemd[1]: var-lib-kubelet-pods-296ba679\x2d7187\x2d4567\x2dbce6\x2d810c55e695c1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:08.650219 kubelet[2407]: I0209 18:40:08.650196 2407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "296ba679-7187-4567-bce6-810c55e695c1" (UID: "296ba679-7187-4567-bce6-810c55e695c1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:08.706912 sshd[4220]: Accepted publickey for core from 10.200.12.6 port 53614 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:40:08.708194 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:08.711892 systemd-logind[1339]: New session 26 of user core. Feb 9 18:40:08.712361 systemd[1]: Started session-26.scope. 
Feb 9 18:40:08.735365 kubelet[2407]: I0209 18:40:08.735332 2407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-bpf-maps\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735365 kubelet[2407]: I0209 18:40:08.735366 2407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-lib-modules\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735378 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/296ba679-7187-4567-bce6-810c55e695c1-cilium-config-path\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735389 2407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-net\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735401 2407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-hubble-tls\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735410 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-run\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735420 2407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cni-path\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: 
I0209 18:40:08.735430 2407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-xtables-lock\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735440 2407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lt97b\" (UniqueName: \"kubernetes.io/projected/296ba679-7187-4567-bce6-810c55e695c1-kube-api-access-lt97b\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735449 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-cilium-cgroup\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735461 2407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-etc-cni-netd\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735477 2407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-hostproc\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735487 2407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/296ba679-7187-4567-bce6-810c55e695c1-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 kubelet[2407]: I0209 18:40:08.735496 2407 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:08.735510 
kubelet[2407]: I0209 18:40:08.735506 2407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/296ba679-7187-4567-bce6-810c55e695c1-clustermesh-secrets\") on node \"ci-3510.3.2-a-b879aa43fa\" DevicePath \"\"" Feb 9 18:40:09.069125 systemd[1]: Removed slice kubepods-burstable-pod296ba679_7187_4567_bce6_810c55e695c1.slice. Feb 9 18:40:09.489136 kubelet[2407]: I0209 18:40:09.489110 2407 scope.go:115] "RemoveContainer" containerID="5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3" Feb 9 18:40:09.492891 env[1356]: time="2024-02-09T18:40:09.492848319Z" level=info msg="RemoveContainer for \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\"" Feb 9 18:40:09.501363 env[1356]: time="2024-02-09T18:40:09.501318660Z" level=info msg="RemoveContainer for \"5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3\" returns successfully" Feb 9 18:40:09.523180 kubelet[2407]: I0209 18:40:09.523148 2407 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:40:09.523460 kubelet[2407]: E0209 18:40:09.523446 2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="296ba679-7187-4567-bce6-810c55e695c1" containerName="mount-cgroup" Feb 9 18:40:09.523578 kubelet[2407]: I0209 18:40:09.523567 2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="296ba679-7187-4567-bce6-810c55e695c1" containerName="mount-cgroup" Feb 9 18:40:09.529139 systemd[1]: Created slice kubepods-burstable-podea70b796_a14d_4e75_89a6_617b92000e33.slice. 
Feb 9 18:40:09.538727 kubelet[2407]: I0209 18:40:09.538694 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-hostproc\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.538882 kubelet[2407]: I0209 18:40:09.538870 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea70b796-a14d-4e75-89a6-617b92000e33-clustermesh-secrets\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.538977 kubelet[2407]: I0209 18:40:09.538967 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-cilium-cgroup\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539075 kubelet[2407]: I0209 18:40:09.539065 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-cni-path\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539166 kubelet[2407]: I0209 18:40:09.539157 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea70b796-a14d-4e75-89a6-617b92000e33-cilium-ipsec-secrets\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539257 kubelet[2407]: I0209 18:40:09.539247 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-host-proc-sys-net\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539359 kubelet[2407]: I0209 18:40:09.539348 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-lib-modules\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539447 kubelet[2407]: I0209 18:40:09.539437 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea70b796-a14d-4e75-89a6-617b92000e33-hubble-tls\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539540 kubelet[2407]: I0209 18:40:09.539531 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-cilium-run\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539633 kubelet[2407]: I0209 18:40:09.539622 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-etc-cni-netd\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539722 kubelet[2407]: I0209 18:40:09.539713 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea70b796-a14d-4e75-89a6-617b92000e33-cilium-config-path\") pod 
\"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539818 kubelet[2407]: I0209 18:40:09.539808 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-host-proc-sys-kernel\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.539911 kubelet[2407]: I0209 18:40:09.539902 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-bpf-maps\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.540013 kubelet[2407]: I0209 18:40:09.540003 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea70b796-a14d-4e75-89a6-617b92000e33-xtables-lock\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.540107 kubelet[2407]: I0209 18:40:09.540097 2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7c9\" (UniqueName: \"kubernetes.io/projected/ea70b796-a14d-4e75-89a6-617b92000e33-kube-api-access-8l7c9\") pod \"cilium-2qc48\" (UID: \"ea70b796-a14d-4e75-89a6-617b92000e33\") " pod="kube-system/cilium-2qc48" Feb 9 18:40:09.832662 env[1356]: time="2024-02-09T18:40:09.832258408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qc48,Uid:ea70b796-a14d-4e75-89a6-617b92000e33,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:09.871711 env[1356]: time="2024-02-09T18:40:09.871639733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:09.871711 env[1356]: time="2024-02-09T18:40:09.871680525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:09.871711 env[1356]: time="2024-02-09T18:40:09.871690924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:09.872143 env[1356]: time="2024-02-09T18:40:09.872074094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b pid=4276 runtime=io.containerd.runc.v2 Feb 9 18:40:09.883029 systemd[1]: Started cri-containerd-bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b.scope. Feb 9 18:40:09.904325 env[1356]: time="2024-02-09T18:40:09.904262965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qc48,Uid:ea70b796-a14d-4e75-89a6-617b92000e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\"" Feb 9 18:40:09.907202 env[1356]: time="2024-02-09T18:40:09.907170157Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:09.982418 env[1356]: time="2024-02-09T18:40:09.982373612Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9\"" Feb 9 18:40:09.984937 env[1356]: time="2024-02-09T18:40:09.984897554Z" level=info msg="StartContainer for \"c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9\"" Feb 9 18:40:10.007595 systemd[1]: Started 
cri-containerd-c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9.scope.
Feb 9 18:40:10.068667 env[1356]: time="2024-02-09T18:40:10.068610442Z" level=info msg="StartContainer for \"c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9\" returns successfully"
Feb 9 18:40:10.081339 systemd[1]: cri-containerd-c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9.scope: Deactivated successfully.
Feb 9 18:40:10.154797 env[1356]: time="2024-02-09T18:40:10.154746547Z" level=info msg="shim disconnected" id=c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9
Feb 9 18:40:10.154797 env[1356]: time="2024-02-09T18:40:10.154791419Z" level=warning msg="cleaning up after shim disconnected" id=c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9 namespace=k8s.io
Feb 9 18:40:10.154797 env[1356]: time="2024-02-09T18:40:10.154801937Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:10.161552 env[1356]: time="2024-02-09T18:40:10.161499486Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4360 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:10.495573 env[1356]: time="2024-02-09T18:40:10.495466179Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:40:10.531506 env[1356]: time="2024-02-09T18:40:10.531455431Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d\""
Feb 9 18:40:10.532130 env[1356]: time="2024-02-09T18:40:10.532096395Z" level=info msg="StartContainer for \"78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d\""
Feb 9 18:40:10.546652 systemd[1]: Started cri-containerd-78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d.scope.
Feb 9 18:40:10.574387 env[1356]: time="2024-02-09T18:40:10.574339597Z" level=info msg="StartContainer for \"78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d\" returns successfully"
Feb 9 18:40:10.579772 systemd[1]: cri-containerd-78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d.scope: Deactivated successfully.
Feb 9 18:40:10.609585 env[1356]: time="2024-02-09T18:40:10.609542911Z" level=info msg="shim disconnected" id=78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d
Feb 9 18:40:10.609827 env[1356]: time="2024-02-09T18:40:10.609808463Z" level=warning msg="cleaning up after shim disconnected" id=78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d namespace=k8s.io
Feb 9 18:40:10.609914 env[1356]: time="2024-02-09T18:40:10.609900487Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:10.618804 env[1356]: time="2024-02-09T18:40:10.618766323Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4422 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:10.921200 kubelet[2407]: W0209 18:40:10.921152 2407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod296ba679_7187_4567_bce6_810c55e695c1.slice/cri-containerd-5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3.scope WatchSource:0}: container "5edceaa8ac5106a34ecdf09069fb48ca68cddebb56c9e8e94bc2896ba416c4d3" in namespace "k8s.io": not found
Feb 9 18:40:11.066329 kubelet[2407]: I0209 18:40:11.065913 2407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=296ba679-7187-4567-bce6-810c55e695c1 path="/var/lib/kubelet/pods/296ba679-7187-4567-bce6-810c55e695c1/volumes"
Feb 9 18:40:11.167537 kubelet[2407]: E0209 18:40:11.167503 2407 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:40:11.500878 env[1356]: time="2024-02-09T18:40:11.500830462Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:40:11.526922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144338677.mount: Deactivated successfully.
Feb 9 18:40:11.541865 env[1356]: time="2024-02-09T18:40:11.541818926Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa\""
Feb 9 18:40:11.542493 env[1356]: time="2024-02-09T18:40:11.542468289Z" level=info msg="StartContainer for \"82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa\""
Feb 9 18:40:11.558013 systemd[1]: Started cri-containerd-82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa.scope.
Feb 9 18:40:11.586067 systemd[1]: cri-containerd-82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa.scope: Deactivated successfully.
Feb 9 18:40:11.591274 env[1356]: time="2024-02-09T18:40:11.591206359Z" level=info msg="StartContainer for \"82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa\" returns successfully"
Feb 9 18:40:11.626460 env[1356]: time="2024-02-09T18:40:11.626407264Z" level=info msg="shim disconnected" id=82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa
Feb 9 18:40:11.626460 env[1356]: time="2024-02-09T18:40:11.626456935Z" level=warning msg="cleaning up after shim disconnected" id=82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa namespace=k8s.io
Feb 9 18:40:11.626460 env[1356]: time="2024-02-09T18:40:11.626466453Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:11.634219 env[1356]: time="2024-02-09T18:40:11.634172907Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4478 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:12.504927 env[1356]: time="2024-02-09T18:40:12.504870893Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:40:12.541964 env[1356]: time="2024-02-09T18:40:12.541922617Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f\""
Feb 9 18:40:12.542825 env[1356]: time="2024-02-09T18:40:12.542790862Z" level=info msg="StartContainer for \"341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f\""
Feb 9 18:40:12.564440 systemd[1]: Started cri-containerd-341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f.scope.
Feb 9 18:40:12.589903 systemd[1]: cri-containerd-341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f.scope: Deactivated successfully.
Feb 9 18:40:12.593995 env[1356]: time="2024-02-09T18:40:12.593952019Z" level=info msg="StartContainer for \"341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f\" returns successfully"
Feb 9 18:40:12.622725 env[1356]: time="2024-02-09T18:40:12.622668116Z" level=info msg="shim disconnected" id=341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f
Feb 9 18:40:12.622725 env[1356]: time="2024-02-09T18:40:12.622719626Z" level=warning msg="cleaning up after shim disconnected" id=341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f namespace=k8s.io
Feb 9 18:40:12.622725 env[1356]: time="2024-02-09T18:40:12.622730504Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:12.630256 env[1356]: time="2024-02-09T18:40:12.630186409Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4534 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:12.645673 systemd[1]: run-containerd-runc-k8s.io-341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f-runc.5ZcXhz.mount: Deactivated successfully.
Feb 9 18:40:12.645770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f-rootfs.mount: Deactivated successfully.
Feb 9 18:40:13.511537 env[1356]: time="2024-02-09T18:40:13.511497477Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:40:13.543518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15529261.mount: Deactivated successfully.
Feb 9 18:40:13.549037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430246940.mount: Deactivated successfully.
Feb 9 18:40:13.557999 env[1356]: time="2024-02-09T18:40:13.557959034Z" level=info msg="CreateContainer within sandbox \"bc33700389b4c5fac3b17fcfb407a9ced48af1025f04e1a9759ff08a9cb0918b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871\""
Feb 9 18:40:13.558808 env[1356]: time="2024-02-09T18:40:13.558785287Z" level=info msg="StartContainer for \"426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871\""
Feb 9 18:40:13.574131 systemd[1]: Started cri-containerd-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871.scope.
Feb 9 18:40:13.614191 env[1356]: time="2024-02-09T18:40:13.614146658Z" level=info msg="StartContainer for \"426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871\" returns successfully"
Feb 9 18:40:14.031328 kubelet[2407]: W0209 18:40:14.030895 2407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea70b796_a14d_4e75_89a6_617b92000e33.slice/cri-containerd-c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9.scope WatchSource:0}: task c7b1905c093a28e300ea955d0b9542dd6b9a830824fbb2fca679ce9c34008ee9 not found: not found
Feb 9 18:40:14.107311 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 18:40:15.190177 systemd[1]: run-containerd-runc-k8s.io-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871-runc.MxZI3l.mount: Deactivated successfully.
Feb 9 18:40:15.508940 kubelet[2407]: I0209 18:40:15.508640 2407 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-b879aa43fa" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:40:15.508581178 +0000 UTC m=+214.614680181 LastTransitionTime:2024-02-09 18:40:15.508581178 +0000 UTC m=+214.614680181 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 18:40:16.570763 systemd-networkd[1501]: lxc_health: Link UP
Feb 9 18:40:16.583183 systemd-networkd[1501]: lxc_health: Gained carrier
Feb 9 18:40:16.583375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:40:17.139853 kubelet[2407]: W0209 18:40:17.139810 2407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea70b796_a14d_4e75_89a6_617b92000e33.slice/cri-containerd-78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d.scope WatchSource:0}: task 78c05b3b990cad64effa7e1c7aaabc8fa726557e6f11d1fd6653d03b7faca48d not found: not found
Feb 9 18:40:17.345906 systemd[1]: run-containerd-runc-k8s.io-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871-runc.9Q5DUu.mount: Deactivated successfully.
Feb 9 18:40:17.854300 kubelet[2407]: I0209 18:40:17.854240 2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2qc48" podStartSLOduration=8.854193139 podCreationTimestamp="2024-02-09 18:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:14.525374617 +0000 UTC m=+213.631473620" watchObservedRunningTime="2024-02-09 18:40:17.854193139 +0000 UTC m=+216.960292142"
Feb 9 18:40:18.551400 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Feb 9 18:40:19.533730 systemd[1]: run-containerd-runc-k8s.io-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871-runc.NIRc2X.mount: Deactivated successfully.
Feb 9 18:40:20.248015 kubelet[2407]: W0209 18:40:20.247957 2407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea70b796_a14d_4e75_89a6_617b92000e33.slice/cri-containerd-82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa.scope WatchSource:0}: task 82bcdbfcd7dc13e7db99fa859420b5499d48779281841d57ed4f67f92437ecfa not found: not found
Feb 9 18:40:21.662593 systemd[1]: run-containerd-runc-k8s.io-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871-runc.F8IdrN.mount: Deactivated successfully.
Feb 9 18:40:23.359708 kubelet[2407]: W0209 18:40:23.359674 2407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea70b796_a14d_4e75_89a6_617b92000e33.slice/cri-containerd-341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f.scope WatchSource:0}: task 341ed73debb2ac51b9bbf420af88ca6b8b56b3bbda953b1eafc7a917ec4df88f not found: not found
Feb 9 18:40:23.784644 systemd[1]: run-containerd-runc-k8s.io-426795647903b1e74117503284d15f5dc88c210d0063cd32b67da2dee4c0c871-runc.z4a16O.mount: Deactivated successfully.
Feb 9 18:40:23.934260 sshd[4220]: pam_unix(sshd:session): session closed for user core
Feb 9 18:40:23.936872 systemd[1]: sshd@23-10.200.20.17:22-10.200.12.6:53614.service: Deactivated successfully.
Feb 9 18:40:23.937596 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 18:40:23.938200 systemd-logind[1339]: Session 26 logged out. Waiting for processes to exit.
Feb 9 18:40:23.939047 systemd-logind[1339]: Removed session 26.
Feb 9 18:40:37.943493 kubelet[2407]: E0209 18:40:37.943462 2407 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:53412->10.200.20.43:2379: read: connection timed out"
Feb 9 18:40:37.950532 systemd[1]: cri-containerd-c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e.scope: Deactivated successfully.
Feb 9 18:40:37.950909 systemd[1]: cri-containerd-c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e.scope: Consumed 1.913s CPU time.
Feb 9 18:40:37.969118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e-rootfs.mount: Deactivated successfully.
Feb 9 18:40:38.011054 env[1356]: time="2024-02-09T18:40:38.011007923Z" level=info msg="shim disconnected" id=c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e
Feb 9 18:40:38.011560 env[1356]: time="2024-02-09T18:40:38.011530439Z" level=warning msg="cleaning up after shim disconnected" id=c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e namespace=k8s.io
Feb 9 18:40:38.011660 env[1356]: time="2024-02-09T18:40:38.011645940Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:38.019011 env[1356]: time="2024-02-09T18:40:38.018972920Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5235 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:38.552942 kubelet[2407]: I0209 18:40:38.552599 2407 scope.go:115] "RemoveContainer" containerID="c7f111754e7657c87685f53e5107e4b36f9d6a9b29d0bbc470afc059f1d0087e"
Feb 9 18:40:38.554729 env[1356]: time="2024-02-09T18:40:38.554687493Z" level=info msg="CreateContainer within sandbox \"5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 18:40:38.579826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276926036.mount: Deactivated successfully.
Feb 9 18:40:38.585196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1235553474.mount: Deactivated successfully.
Feb 9 18:40:38.600076 env[1356]: time="2024-02-09T18:40:38.600019873Z" level=info msg="CreateContainer within sandbox \"5c85dd3590539d85f6ffbcba93f5785dc239b388ff163d05b5ee694d19156d9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4901cd692b14160b8bad5a96ae60bc8f07862d70a2013124a1bad1b85c525eeb\""
Feb 9 18:40:38.600810 env[1356]: time="2024-02-09T18:40:38.600775791Z" level=info msg="StartContainer for \"4901cd692b14160b8bad5a96ae60bc8f07862d70a2013124a1bad1b85c525eeb\""
Feb 9 18:40:38.619752 systemd[1]: Started cri-containerd-4901cd692b14160b8bad5a96ae60bc8f07862d70a2013124a1bad1b85c525eeb.scope.
Feb 9 18:40:38.661201 env[1356]: time="2024-02-09T18:40:38.661149469Z" level=info msg="StartContainer for \"4901cd692b14160b8bad5a96ae60bc8f07862d70a2013124a1bad1b85c525eeb\" returns successfully"
Feb 9 18:40:38.732700 systemd[1]: cri-containerd-143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced.scope: Deactivated successfully.
Feb 9 18:40:38.732993 systemd[1]: cri-containerd-143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced.scope: Consumed 3.221s CPU time.
Feb 9 18:40:38.787694 env[1356]: time="2024-02-09T18:40:38.787649898Z" level=info msg="shim disconnected" id=143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced
Feb 9 18:40:38.788012 env[1356]: time="2024-02-09T18:40:38.787993123Z" level=warning msg="cleaning up after shim disconnected" id=143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced namespace=k8s.io
Feb 9 18:40:38.788096 env[1356]: time="2024-02-09T18:40:38.788081908Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:38.795421 env[1356]: time="2024-02-09T18:40:38.795387612Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5298 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:38.969881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced-rootfs.mount: Deactivated successfully.
Feb 9 18:40:39.555200 kubelet[2407]: I0209 18:40:39.555176 2407 scope.go:115] "RemoveContainer" containerID="143360341c3c767b341822e7dd6ea9a23e49922b6479b89c52c65e49abe6aced"
Feb 9 18:40:39.558815 env[1356]: time="2024-02-09T18:40:39.557724161Z" level=info msg="CreateContainer within sandbox \"6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 18:40:39.580752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334794199.mount: Deactivated successfully.
Feb 9 18:40:39.586026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717947023.mount: Deactivated successfully.
Feb 9 18:40:39.602755 env[1356]: time="2024-02-09T18:40:39.602691105Z" level=info msg="CreateContainer within sandbox \"6165d33e4aa77ffd73e6f3587916150ab8127f2b950ea0b21027c75f4c116650\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f7743b08cdcf25febc80b577f81f3ea1f24954a8b1103610a8081b99145536dc\""
Feb 9 18:40:39.603394 env[1356]: time="2024-02-09T18:40:39.603360797Z" level=info msg="StartContainer for \"f7743b08cdcf25febc80b577f81f3ea1f24954a8b1103610a8081b99145536dc\""
Feb 9 18:40:39.618898 systemd[1]: Started cri-containerd-f7743b08cdcf25febc80b577f81f3ea1f24954a8b1103610a8081b99145536dc.scope.
Feb 9 18:40:39.682296 env[1356]: time="2024-02-09T18:40:39.682235740Z" level=info msg="StartContainer for \"f7743b08cdcf25febc80b577f81f3ea1f24954a8b1103610a8081b99145536dc\" returns successfully"
Feb 9 18:40:40.311714 kubelet[2407]: E0209 18:40:40.311408 2407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-b879aa43fa.17b245de50535879", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-b879aa43fa", UID:"1fd66c0a2b1014958b920312728c3e65", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b879aa43fa"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 40, 29, 859534969, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 40, 29, 859534969, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:53218->10.200.20.43:2379: read: connection timed out' (will not retry!)
Feb 9 18:40:41.052025 env[1356]: time="2024-02-09T18:40:41.051988411Z" level=info msg="StopPodSandbox for \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\""
Feb 9 18:40:41.052512 env[1356]: time="2024-02-09T18:40:41.052471014Z" level=info msg="TearDown network for sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" successfully"
Feb 9 18:40:41.052598 env[1356]: time="2024-02-09T18:40:41.052582276Z" level=info msg="StopPodSandbox for \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" returns successfully"
Feb 9 18:40:41.052997 env[1356]: time="2024-02-09T18:40:41.052975334Z" level=info msg="RemovePodSandbox for \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\""
Feb 9 18:40:41.053114 env[1356]: time="2024-02-09T18:40:41.053081477Z" level=info msg="Forcibly stopping sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\""
Feb 9 18:40:41.053242 env[1356]: time="2024-02-09T18:40:41.053223454Z" level=info msg="TearDown network for sandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" successfully"
Feb 9 18:40:41.063241 env[1356]: time="2024-02-09T18:40:41.063209223Z" level=info msg="RemovePodSandbox \"061ddae7d0dac4016e7528a9f838a7b08d4ebad685dfb36c9e2b563bf4a82f49\" returns successfully"
Feb 9 18:40:41.064256 env[1356]: time="2024-02-09T18:40:41.064219902Z" level=info msg="StopPodSandbox for \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\""
Feb 9 18:40:41.064363 env[1356]: time="2024-02-09T18:40:41.064320046Z" level=info msg="TearDown network for sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" successfully"
Feb 9 18:40:41.064363 env[1356]: time="2024-02-09T18:40:41.064357600Z" level=info msg="StopPodSandbox for \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" returns successfully"
Feb 9 18:40:41.064862 env[1356]: time="2024-02-09T18:40:41.064829364Z" level=info msg="RemovePodSandbox for \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\""
Feb 9 18:40:41.064904 env[1356]: time="2024-02-09T18:40:41.064866559Z" level=info msg="Forcibly stopping sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\""
Feb 9 18:40:41.064954 env[1356]: time="2024-02-09T18:40:41.064931748Z" level=info msg="TearDown network for sandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" successfully"
Feb 9 18:40:41.074019 env[1356]: time="2024-02-09T18:40:41.073974947Z" level=info msg="RemovePodSandbox \"3fd533c40f0bce33050149c95f330bea7975a5f63e6f3035ad79be0c00b74520\" returns successfully"
Feb 9 18:40:41.074386 env[1356]: time="2024-02-09T18:40:41.074362325Z" level=info msg="StopPodSandbox for \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\""
Feb 9 18:40:41.074626 env[1356]: time="2024-02-09T18:40:41.074587089Z" level=info msg="TearDown network for sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" successfully"
Feb 9 18:40:41.074705 env[1356]: time="2024-02-09T18:40:41.074688993Z" level=info msg="StopPodSandbox for \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" returns successfully"
Feb 9 18:40:41.075154 env[1356]: time="2024-02-09T18:40:41.075122044Z" level=info msg="RemovePodSandbox for \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\""
Feb 9 18:40:41.075235 env[1356]: time="2024-02-09T18:40:41.075155159Z" level=info msg="Forcibly stopping sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\""
Feb 9 18:40:41.075235 env[1356]: time="2024-02-09T18:40:41.075218229Z" level=info msg="TearDown network for sandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" successfully"
Feb 9 18:40:41.081737 env[1356]: time="2024-02-09T18:40:41.081684438Z" level=info msg="RemovePodSandbox \"0e1018b4e4ce02dde9df9937d746ac86e039ff28bc8cd21c9b4d3ebeb16e1414\" returns successfully"
Feb 9 18:40:47.945133 kubelet[2407]: E0209 18:40:47.945083 2407 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 18:40:48.631089 kubelet[2407]: I0209 18:40:48.631055 2407 status_manager.go:809] "Failed to get status for pod" podUID=1fd66c0a2b1014958b920312728c3e65 pod="kube-system/kube-apiserver-ci-3510.3.2-a-b879aa43fa" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:53312->10.200.20.43:2379: read: connection timed out"
Feb 9 18:40:57.945537 kubelet[2407]: E0209 18:40:57.945504 2407 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b879aa43fa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"