Feb 9 09:58:12.029221 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:58:12.029244 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:58:12.029253 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 09:58:12.029261 kernel: printk: bootconsole [pl11] enabled
Feb 9 09:58:12.029266 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:58:12.029272 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 09:58:12.029278 kernel: random: crng init done
Feb 9 09:58:12.029284 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:58:12.029289 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 09:58:12.029295 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029300 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029307 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 09:58:12.029312 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029318 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029325 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029331 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029336 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029344 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029349 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 09:58:12.029355 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:58:12.029361 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 09:58:12.029367 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:58:12.029372 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:58:12.029378 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 09:58:12.029384 kernel: Zone ranges:
Feb 9 09:58:12.029389 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 09:58:12.029395 kernel: DMA32 empty
Feb 9 09:58:12.029402 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:58:12.029407 kernel: Movable zone start for each node
Feb 9 09:58:12.029413 kernel: Early memory node ranges
Feb 9 09:58:12.029419 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 09:58:12.029424 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 09:58:12.029430 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 09:58:12.029436 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 09:58:12.029442 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 09:58:12.029447 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 09:58:12.029453 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 09:58:12.029459 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 09:58:12.029464 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:58:12.029472 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:58:12.029480 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 09:58:12.029486 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:58:12.029492 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:58:12.029498 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:58:12.029506 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 09:58:12.029511 kernel: psci: SMC Calling Convention v1.4
Feb 9 09:58:12.029517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 09:58:12.029523 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 09:58:12.029529 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:58:12.029536 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:58:12.029542 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:58:12.029548 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:58:12.029554 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:58:12.029560 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:58:12.029566 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:58:12.029572 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:58:12.029579 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:58:12.029585 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:58:12.029591 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 09:58:12.029597 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 09:58:12.029603 kernel: Policy zone: Normal
Feb 9 09:58:12.029611 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:58:12.029618 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:58:12.029624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:58:12.029630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:58:12.029636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:58:12.029643 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 09:58:12.029650 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 09:58:12.029656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:58:12.029662 kernel: trace event string verifier disabled
Feb 9 09:58:12.029668 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:58:12.029674 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:58:12.029680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:58:12.029686 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:58:12.029693 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:58:12.029699 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:58:12.029705 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:58:12.029712 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:58:12.029718 kernel: GICv3: 960 SPIs implemented
Feb 9 09:58:12.029724 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:58:12.029730 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:58:12.029736 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:58:12.029742 kernel: GICv3: 16 PPIs implemented
Feb 9 09:58:12.029748 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 09:58:12.029754 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 09:58:12.029760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:58:12.029766 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:58:12.029772 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:58:12.029778 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:58:12.029786 kernel: Console: colour dummy device 80x25
Feb 9 09:58:12.029793 kernel: printk: console [tty1] enabled
Feb 9 09:58:12.029799 kernel: ACPI: Core revision 20210730
Feb 9 09:58:12.029806 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:58:12.029812 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:58:12.029818 kernel: LSM: Security Framework initializing
Feb 9 09:58:12.029824 kernel: SELinux: Initializing.
Feb 9 09:58:12.029831 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:58:12.029837 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:58:12.029845 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 09:58:12.029851 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 09:58:12.029858 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:58:12.029864 kernel: Remapping and enabling EFI services.
Feb 9 09:58:12.029870 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:58:12.029876 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:58:12.029883 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 09:58:12.029889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:58:12.029895 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:58:12.029902 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:58:12.029909 kernel: SMP: Total of 2 processors activated.
Feb 9 09:58:12.029915 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:58:12.029921 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 09:58:12.029928 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:58:12.029934 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:58:12.029940 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:58:12.029946 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:58:12.029952 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:58:12.029960 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:58:12.029966 kernel: alternatives: patching kernel code
Feb 9 09:58:12.029977 kernel: devtmpfs: initialized
Feb 9 09:58:12.029985 kernel: KASLR enabled
Feb 9 09:58:12.029992 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:58:12.029999 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:58:12.030005 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:58:12.030011 kernel: SMBIOS 3.1.0 present.
Feb 9 09:58:12.030018 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 09:58:12.030025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:58:12.030033 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:58:12.030040 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:58:12.030047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:58:12.030053 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:58:12.030060 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1
Feb 9 09:58:12.030066 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:58:12.030073 kernel: cpuidle: using governor menu
Feb 9 09:58:12.030081 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:58:12.030088 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:58:12.030094 kernel: ACPI: bus type PCI registered
Feb 9 09:58:12.030101 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:58:12.030107 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:58:12.030114 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:58:12.030120 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:58:12.030127 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:58:12.030133 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:58:12.030141 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:58:12.030148 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:58:12.030154 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:58:12.030161 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:58:12.030179 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:58:12.030187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:58:12.030193 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:58:12.030200 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:58:12.030207 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:58:12.030216 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:58:12.030222 kernel: ACPI: Interpreter enabled
Feb 9 09:58:12.030229 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:58:12.030235 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:58:12.030242 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:58:12.030248 kernel: printk: bootconsole [pl11] disabled
Feb 9 09:58:12.030255 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 09:58:12.030262 kernel: iommu: Default domain type: Translated
Feb 9 09:58:12.030268 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:58:12.030276 kernel: vgaarb: loaded
Feb 9 09:58:12.030283 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:58:12.030289 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:58:12.030296 kernel: PTP clock support registered
Feb 9 09:58:12.030302 kernel: Registered efivars operations
Feb 9 09:58:12.030309 kernel: No ACPI PMU IRQ for CPU0
Feb 9 09:58:12.030315 kernel: No ACPI PMU IRQ for CPU1
Feb 9 09:58:12.030322 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:58:12.030328 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:58:12.030336 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:58:12.030343 kernel: pnp: PnP ACPI init
Feb 9 09:58:12.030349 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 09:58:12.030356 kernel: NET: Registered PF_INET protocol family
Feb 9 09:58:12.030363 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:58:12.030369 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:58:12.030376 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:58:12.030383 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:58:12.030390 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:58:12.030398 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:58:12.030405 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:58:12.030412 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:58:12.030418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:58:12.030425 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:58:12.030431 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 09:58:12.030438 kernel: kvm [1]: HYP mode not available
Feb 9 09:58:12.030444 kernel: Initialise system trusted keyrings
Feb 9 09:58:12.030451 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:58:12.030459 kernel: Key type asymmetric registered
Feb 9 09:58:12.030465 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:58:12.030472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:58:12.030478 kernel: io scheduler mq-deadline registered
Feb 9 09:58:12.030485 kernel: io scheduler kyber registered
Feb 9 09:58:12.030492 kernel: io scheduler bfq registered
Feb 9 09:58:12.030498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:58:12.030505 kernel: thunder_xcv, ver 1.0
Feb 9 09:58:12.030511 kernel: thunder_bgx, ver 1.0
Feb 9 09:58:12.030519 kernel: nicpf, ver 1.0
Feb 9 09:58:12.030526 kernel: nicvf, ver 1.0
Feb 9 09:58:12.030654 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:58:12.030715 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:58:11 UTC (1707472691)
Feb 9 09:58:12.030723 kernel: efifb: probing for efifb
Feb 9 09:58:12.030731 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 09:58:12.030737 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 09:58:12.030744 kernel: efifb: scrolling: redraw
Feb 9 09:58:12.030753 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 09:58:12.030759 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:58:12.030766 kernel: fb0: EFI VGA frame buffer device
Feb 9 09:58:12.030773 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 09:58:12.030779 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:58:12.030786 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:58:12.030792 kernel: Segment Routing with IPv6
Feb 9 09:58:12.030799 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:58:12.030805 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:58:12.030813 kernel: Key type dns_resolver registered
Feb 9 09:58:12.030819 kernel: registered taskstats version 1
Feb 9 09:58:12.030826 kernel: Loading compiled-in X.509 certificates
Feb 9 09:58:12.030833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:58:12.030839 kernel: Key type .fscrypt registered
Feb 9 09:58:12.030846 kernel: Key type fscrypt-provisioning registered
Feb 9 09:58:12.030853 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:58:12.030859 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:58:12.030866 kernel: ima: No architecture policies found
Feb 9 09:58:12.030874 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:58:12.030881 kernel: Run /init as init process
Feb 9 09:58:12.030887 kernel: with arguments:
Feb 9 09:58:12.030894 kernel: /init
Feb 9 09:58:12.030900 kernel: with environment:
Feb 9 09:58:12.030906 kernel: HOME=/
Feb 9 09:58:12.030913 kernel: TERM=linux
Feb 9 09:58:12.030920 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:58:12.030928 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:58:12.030938 systemd[1]: Detected virtualization microsoft.
Feb 9 09:58:12.030945 systemd[1]: Detected architecture arm64.
Feb 9 09:58:12.030952 systemd[1]: Running in initrd.
Feb 9 09:58:12.030959 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:58:12.030966 systemd[1]: Hostname set to .
Feb 9 09:58:12.030973 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:58:12.030980 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:58:12.030988 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:58:12.030996 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:58:12.031003 systemd[1]: Reached target paths.target.
Feb 9 09:58:12.031010 systemd[1]: Reached target slices.target.
Feb 9 09:58:12.031017 systemd[1]: Reached target swap.target.
Feb 9 09:58:12.031024 systemd[1]: Reached target timers.target.
Feb 9 09:58:12.031031 systemd[1]: Listening on iscsid.socket.
Feb 9 09:58:12.031038 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:58:12.031047 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:58:12.031054 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:58:12.031061 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:58:12.031068 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:58:12.031075 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:58:12.031083 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:58:12.031090 systemd[1]: Reached target sockets.target.
Feb 9 09:58:12.031097 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:58:12.031104 systemd[1]: Finished network-cleanup.service.
Feb 9 09:58:12.031112 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:58:12.031119 systemd[1]: Starting systemd-journald.service...
Feb 9 09:58:12.031126 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:58:12.031133 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:58:12.031140 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:58:12.031151 systemd-journald[276]: Journal started
Feb 9 09:58:12.031206 systemd-journald[276]: Runtime Journal (/run/log/journal/248411a00eae4ea281a34ec7e9cb4305) is 8.0M, max 78.6M, 70.6M free.
Feb 9 09:58:12.015222 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 09:58:12.054455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:58:12.059521 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 09:58:12.076105 systemd[1]: Started systemd-journald.service.
Feb 9 09:58:12.076128 kernel: Bridge firewalling registered
Feb 9 09:58:12.067786 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:58:12.104498 kernel: audit: type=1130 audit(1707472692.080:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.067820 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:58:12.167378 kernel: SCSI subsystem initialized
Feb 9 09:58:12.167399 kernel: audit: type=1130 audit(1707472692.120:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.069962 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 09:58:12.080283 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 09:58:12.096645 systemd[1]: Started systemd-resolved.service.
Feb 9 09:58:12.211098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:58:12.211119 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:58:12.169151 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:58:12.230709 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:58:12.193987 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:58:12.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.203572 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:58:12.306826 kernel: audit: type=1130 audit(1707472692.193:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.306854 kernel: audit: type=1130 audit(1707472692.203:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.306865 kernel: audit: type=1130 audit(1707472692.223:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.224269 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:58:12.337615 kernel: audit: type=1130 audit(1707472692.311:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.235446 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 09:58:12.240057 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:58:12.276593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:58:12.406342 kernel: audit: type=1130 audit(1707472692.337:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.406365 kernel: audit: type=1130 audit(1707472692.346:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.298471 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:58:12.412524 dracut-cmdline[296]: dracut-dracut-053
Feb 9 09:58:12.311969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:58:12.422559 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:58:12.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.337784 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:58:12.483125 kernel: audit: type=1130 audit(1707472692.429:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.348136 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:58:12.392300 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:58:12.414081 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:58:12.544202 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:58:12.555194 kernel: iscsi: registered transport (tcp)
Feb 9 09:58:12.574570 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:58:12.574593 kernel: QLogic iSCSI HBA Driver
Feb 9 09:58:12.604681 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:58:12.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:12.609987 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:58:12.663204 kernel: raid6: neonx8 gen() 13825 MB/s
Feb 9 09:58:12.683182 kernel: raid6: neonx8 xor() 10817 MB/s
Feb 9 09:58:12.703179 kernel: raid6: neonx4 gen() 13579 MB/s
Feb 9 09:58:12.724180 kernel: raid6: neonx4 xor() 11202 MB/s
Feb 9 09:58:12.745178 kernel: raid6: neonx2 gen() 12972 MB/s
Feb 9 09:58:12.767180 kernel: raid6: neonx2 xor() 10386 MB/s
Feb 9 09:58:12.787182 kernel: raid6: neonx1 gen() 10513 MB/s
Feb 9 09:58:12.807179 kernel: raid6: neonx1 xor() 8803 MB/s
Feb 9 09:58:12.828180 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 9 09:58:12.848179 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 09:58:12.868179 kernel: raid6: int64x4 gen() 7272 MB/s
Feb 9 09:58:12.889179 kernel: raid6: int64x4 xor() 3849 MB/s
Feb 9 09:58:12.910178 kernel: raid6: int64x2 gen() 6158 MB/s
Feb 9 09:58:12.930178 kernel: raid6: int64x2 xor() 3319 MB/s
Feb 9 09:58:12.951180 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 9 09:58:12.975889 kernel: raid6: int64x1 xor() 2648 MB/s
Feb 9 09:58:12.975898 kernel: raid6: using algorithm neonx8 gen() 13825 MB/s
Feb 9 09:58:12.975906 kernel: raid6: .... xor() 10817 MB/s, rmw enabled
Feb 9 09:58:12.980165 kernel: raid6: using neon recovery algorithm
Feb 9 09:58:12.998182 kernel: xor: measuring software checksum speed
Feb 9 09:58:13.008068 kernel: 8regs : 17304 MB/sec
Feb 9 09:58:13.008078 kernel: 32regs : 20760 MB/sec
Feb 9 09:58:13.016601 kernel: arm64_neon : 27741 MB/sec
Feb 9 09:58:13.016611 kernel: xor: using function: arm64_neon (27741 MB/sec)
Feb 9 09:58:13.072186 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:58:13.081712 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:58:13.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:13.090000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:58:13.090000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:58:13.090822 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:58:13.109418 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 09:58:13.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:13.115700 systemd[1]: Started systemd-udevd.service.
Feb 9 09:58:13.126050 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:58:13.144100 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 09:58:13.170102 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:58:13.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:13.175814 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:58:13.215141 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:58:13.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:58:13.266199 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 09:58:13.289198 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 09:58:13.289251 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 09:58:13.289260 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 09:58:13.290185 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 09:58:13.321220 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 09:58:13.321270 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 09:58:13.332193 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 09:58:13.338200 kernel: scsi host0: storvsc_host_t
Feb 9 09:58:13.338378 kernel: scsi host1: storvsc_host_t
Feb 9 09:58:13.348622 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 09:58:13.357189 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 09:58:13.375553 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 09:58:13.375900 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 09:58:13.388105 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 09:58:13.388269 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 09:58:13.388352 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 09:58:13.392188 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 09:58:13.392324 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 09:58:13.393195 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 09:58:13.406184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:58:13.417494 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 09:58:13.417654 kernel: hv_netvsc 000d3ac4-4940-000d-3ac4-4940000d3ac4 eth0: VF slot 1 added
Feb 9 09:58:13.435266 kernel: hv_vmbus: registering driver hv_pci
Feb 9 09:58:13.435330 kernel: hv_pci 5d382a3e-0c86-4322-a4cc-9cb18047f7a1: PCI VMBus probing: Using version 0x10004
Feb 9 09:58:13.452998 kernel: hv_pci 5d382a3e-0c86-4322-a4cc-9cb18047f7a1: PCI host bridge to bus 0c86:00
Feb 9 09:58:13.453162 kernel: pci_bus 0c86:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 09:58:13.453273 kernel: pci_bus 0c86:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 09:58:13.466639 kernel: pci 0c86:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 09:58:13.480613 kernel: pci 0c86:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:58:13.505228 kernel: pci 0c86:00:02.0: enabling Extended Tags
Feb 9 09:58:13.528287 kernel: pci 0c86:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0c86:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 09:58:13.540153 kernel: pci_bus 0c86:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 09:58:13.540305 kernel: pci 0c86:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:58:13.580196 kernel: mlx5_core 0c86:00:02.0: firmware version: 16.30.1284
Feb 9 09:58:13.738191 kernel: mlx5_core 0c86:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 09:58:13.796235 kernel: hv_netvsc 000d3ac4-4940-000d-3ac4-4940000d3ac4 eth0: VF registering: eth1
Feb 9 09:58:13.796413 kernel: mlx5_core 0c86:00:02.0 eth1: joined to eth0
Feb 9 09:58:13.808209 kernel: mlx5_core 0c86:00:02.0 enP3206s1: renamed from eth1
Feb 9 09:58:13.896319 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:58:13.942197 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (528)
Feb 9 09:58:13.954586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:58:14.126212 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:58:14.134999 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:58:14.172040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:58:14.188191 systemd[1]: Starting disk-uuid.service... Feb 9 09:58:14.210211 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:58:14.224201 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:58:15.232073 disk-uuid[603]: The operation has completed successfully. Feb 9 09:58:15.238033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:58:15.293245 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:58:15.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.293345 systemd[1]: Finished disk-uuid.service. Feb 9 09:58:15.302947 systemd[1]: Starting verity-setup.service... Feb 9 09:58:15.355412 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:58:15.555388 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:58:15.561818 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:58:15.572029 systemd[1]: Finished verity-setup.service. Feb 9 09:58:15.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.629191 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:58:15.629268 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:58:15.633388 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Feb 9 09:58:15.634133 systemd[1]: Starting ignition-setup.service... Feb 9 09:58:15.642215 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:58:15.682335 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:58:15.682387 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:58:15.688230 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:58:15.738310 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:58:15.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.747000 audit: BPF prog-id=9 op=LOAD Feb 9 09:58:15.747840 systemd[1]: Starting systemd-networkd.service... Feb 9 09:58:15.765402 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:58:15.776896 systemd-networkd[870]: lo: Link UP Feb 9 09:58:15.776907 systemd-networkd[870]: lo: Gained carrier Feb 9 09:58:15.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.777326 systemd-networkd[870]: Enumeration completed Feb 9 09:58:15.780479 systemd[1]: Started systemd-networkd.service. Feb 9 09:58:15.785124 systemd[1]: Reached target network.target. Feb 9 09:58:15.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.785300 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:58:15.790370 systemd[1]: Starting iscsiuio.service... Feb 9 09:58:15.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:58:15.801066 systemd[1]: Started iscsiuio.service. Feb 9 09:58:15.838244 iscsid[879]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:58:15.838244 iscsid[879]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 09:58:15.838244 iscsid[879]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:58:15.838244 iscsid[879]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:58:15.838244 iscsid[879]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:58:15.838244 iscsid[879]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:58:15.838244 iscsid[879]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:58:15.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.809519 systemd[1]: Starting iscsid.service... Feb 9 09:58:15.819968 systemd[1]: Started iscsid.service. Feb 9 09:58:15.827649 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:58:15.846290 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:58:15.851473 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:58:15.862289 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 09:58:15.879982 systemd[1]: Reached target remote-fs.target. Feb 9 09:58:15.899759 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:58:15.919255 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:58:15.982936 systemd[1]: Finished ignition-setup.service. Feb 9 09:58:16.012456 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 9 09:58:16.012482 kernel: audit: type=1130 audit(1707472695.987:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.988312 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:58:16.028186 kernel: mlx5_core 0c86:00:02.0 enP3206s1: Link up Feb 9 09:58:16.074524 kernel: hv_netvsc 000d3ac4-4940-000d-3ac4-4940000d3ac4 eth0: Data path switched to VF: enP3206s1 Feb 9 09:58:16.074732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:58:16.075034 systemd-networkd[870]: enP3206s1: Link UP Feb 9 09:58:16.075273 systemd-networkd[870]: eth0: Link UP Feb 9 09:58:16.075607 systemd-networkd[870]: eth0: Gained carrier Feb 9 09:58:16.088769 systemd-networkd[870]: enP3206s1: Gained carrier Feb 9 09:58:16.101256 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:58:17.715323 systemd-networkd[870]: eth0: Gained IPv6LL Feb 9 09:58:19.638683 ignition[894]: Ignition 2.14.0 Feb 9 09:58:19.638693 ignition[894]: Stage: fetch-offline Feb 9 09:58:19.638749 ignition[894]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:19.638772 ignition[894]: parsing config with SHA512: 
4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:19.763730 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:19.763893 ignition[894]: parsed url from cmdline: "" Feb 9 09:58:19.763897 ignition[894]: no config URL provided Feb 9 09:58:19.763903 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:58:19.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:19.778510 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:58:19.815303 kernel: audit: type=1130 audit(1707472699.787:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:19.763911 ignition[894]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:58:19.809826 systemd[1]: Starting ignition-fetch.service... 
Feb 9 09:58:19.763916 ignition[894]: failed to fetch config: resource requires networking Feb 9 09:58:19.764022 ignition[894]: Ignition finished successfully Feb 9 09:58:19.822121 ignition[901]: Ignition 2.14.0 Feb 9 09:58:19.822128 ignition[901]: Stage: fetch Feb 9 09:58:19.822262 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:19.822281 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:19.825249 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:19.825636 ignition[901]: parsed url from cmdline: "" Feb 9 09:58:19.825641 ignition[901]: no config URL provided Feb 9 09:58:19.825648 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:58:19.825661 ignition[901]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:58:19.825695 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 09:58:19.915141 ignition[901]: GET result: OK Feb 9 09:58:19.915269 ignition[901]: config has been read from IMDS userdata Feb 9 09:58:19.915335 ignition[901]: parsing config with SHA512: a270ff849cf73ee3abcb1a71e1358f595b5edf327865eb705dbc02e1c69b48f8446ce3230b9a10efa387b5da23dfb0945e21e1d34bccf5e2facc6f31e4cd93c1 Feb 9 09:58:19.947155 unknown[901]: fetched base config from "system" Feb 9 09:58:19.948226 unknown[901]: fetched base config from "system" Feb 9 09:58:19.948835 ignition[901]: fetch: fetch complete Feb 9 09:58:19.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:19.948233 unknown[901]: fetched user config from "azure" Feb 9 09:58:19.948840 ignition[901]: fetch: fetch passed Feb 9 09:58:19.953265 systemd[1]: Finished ignition-fetch.service. Feb 9 09:58:19.948885 ignition[901]: Ignition finished successfully Feb 9 09:58:20.006453 kernel: audit: type=1130 audit(1707472699.960:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:19.961332 systemd[1]: Starting ignition-kargs.service... Feb 9 09:58:19.993924 ignition[908]: Ignition 2.14.0 Feb 9 09:58:20.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:20.011306 systemd[1]: Finished ignition-kargs.service. Feb 9 09:58:20.043090 kernel: audit: type=1130 audit(1707472700.015:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:19.993930 ignition[908]: Stage: kargs Feb 9 09:58:20.033541 systemd[1]: Starting ignition-disks.service... Feb 9 09:58:19.994034 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:19.994052 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:20.058584 systemd[1]: Finished ignition-disks.service. Feb 9 09:58:20.090495 kernel: audit: type=1130 audit(1707472700.062:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:20.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:20.004205 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:20.063113 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:58:20.007193 ignition[908]: kargs: kargs passed Feb 9 09:58:20.085779 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:58:20.007237 ignition[908]: Ignition finished successfully Feb 9 09:58:20.091234 systemd[1]: Reached target local-fs.target. Feb 9 09:58:20.046474 ignition[914]: Ignition 2.14.0 Feb 9 09:58:20.101534 systemd[1]: Reached target sysinit.target. Feb 9 09:58:20.046481 ignition[914]: Stage: disks Feb 9 09:58:20.109947 systemd[1]: Reached target basic.target. Feb 9 09:58:20.046586 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:20.118216 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:58:20.046608 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:20.052243 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:20.053676 ignition[914]: disks: disks passed Feb 9 09:58:20.053733 ignition[914]: Ignition finished successfully Feb 9 09:58:20.187943 systemd-fsck[922]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 09:58:20.198259 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:58:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:20.227557 kernel: audit: type=1130 audit(1707472700.202:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:20.224096 systemd[1]: Mounting sysroot.mount... Feb 9 09:58:20.244183 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:58:20.244690 systemd[1]: Mounted sysroot.mount. Feb 9 09:58:20.251868 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:58:20.289194 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:58:20.293911 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 09:58:20.301325 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:58:20.301356 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:58:20.307182 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:58:20.347533 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:58:20.352798 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:58:20.382271 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (933) Feb 9 09:58:20.393760 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:58:20.393777 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:58:20.393786 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:58:20.398945 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:58:20.409590 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 09:58:20.437367 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:58:20.446547 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:58:20.455588 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:58:20.936694 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:58:20.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:20.961096 systemd[1]: Starting ignition-mount.service... Feb 9 09:58:20.971026 kernel: audit: type=1130 audit(1707472700.941:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:20.971332 systemd[1]: Starting sysroot-boot.service... Feb 9 09:58:20.976027 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:58:20.976134 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 09:58:21.001924 ignition[998]: INFO : Ignition 2.14.0 Feb 9 09:58:21.001924 ignition[998]: INFO : Stage: mount Feb 9 09:58:21.010676 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:21.010676 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:21.010676 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:21.010676 ignition[998]: INFO : mount: mount passed Feb 9 09:58:21.010676 ignition[998]: INFO : Ignition finished successfully Feb 9 09:58:21.089207 kernel: audit: type=1130 audit(1707472701.014:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:21.089234 kernel: audit: type=1130 audit(1707472701.069:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:21.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:21.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:21.010632 systemd[1]: Finished ignition-mount.service. Feb 9 09:58:21.062702 systemd[1]: Finished sysroot-boot.service. 
Feb 9 09:58:21.897657 coreos-metadata[932]: Feb 09 09:58:21.897 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 09:58:21.906725 coreos-metadata[932]: Feb 09 09:58:21.906 INFO Fetch successful Feb 9 09:58:21.939215 coreos-metadata[932]: Feb 09 09:58:21.939 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 09:58:21.954897 coreos-metadata[932]: Feb 09 09:58:21.954 INFO Fetch successful Feb 9 09:58:21.971330 coreos-metadata[932]: Feb 09 09:58:21.971 INFO wrote hostname ci-3510.3.2-a-ff24132019 to /sysroot/etc/hostname Feb 9 09:58:21.981142 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 09:58:21.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:21.987400 systemd[1]: Starting ignition-files.service... Feb 9 09:58:22.015698 kernel: audit: type=1130 audit(1707472701.986:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:22.017997 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:58:22.037194 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1011) Feb 9 09:58:22.049047 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:58:22.049082 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:58:22.053930 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:58:22.058829 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 09:58:22.083511 ignition[1030]: INFO : Ignition 2.14.0 Feb 9 09:58:22.083511 ignition[1030]: INFO : Stage: files Feb 9 09:58:22.094673 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:22.094673 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:22.094673 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:22.094673 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:58:22.138847 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:58:22.138847 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:58:22.189277 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:58:22.199190 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:58:22.213983 unknown[1030]: wrote ssh authorized keys file for user: core Feb 9 09:58:22.220654 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:58:22.233477 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:58:22.249183 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:58:22.580046 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:58:22.734558 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 
db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 09:58:22.750779 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:58:22.750779 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:58:22.750779 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:58:23.074389 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:58:23.292599 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:58:23.303562 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:58:23.303562 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 09:58:23.533774 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:58:23.780984 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 09:58:23.797576 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:58:23.797576 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:58:23.797576 
ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:58:23.943722 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:58:24.233762 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc Feb 9 09:58:24.252006 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:58:24.252006 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:58:24.252006 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:58:24.309640 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:58:24.603546 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 9 09:58:24.619706 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:58:24.619706 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:58:24.619706 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:58:24.660298 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:58:25.241755 ignition[1030]: DEBUG : files: createFilesystemsFiles: 
createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 9 09:58:25.257843 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:58:25.257843 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:58:25.257843 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:58:25.257843 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:58:25.257843 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 09:58:25.661658 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 09:58:25.715088 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:58:25.724896 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:58:25.886982 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1032) Feb 9 09:58:25.887006 kernel: audit: type=1130 audit(1707472705.822:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1243069572" Feb 9 09:58:25.887061 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1243069572": device or resource busy Feb 9 09:58:25.887061 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1243069572", trying btrfs: device or resource busy Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1243069572" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1243069572" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1243069572" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1243069572" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem38796426" Feb 9 09:58:25.887061 ignition[1030]: CRITICAL : files: createFilesystemsFiles: 
createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem38796426": device or resource busy Feb 9 09:58:25.887061 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem38796426", trying btrfs: device or resource busy Feb 9 09:58:25.887061 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem38796426" Feb 9 09:58:26.178880 kernel: audit: type=1130 audit(1707472705.934:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.178913 kernel: audit: type=1131 audit(1707472705.961:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.178924 kernel: audit: type=1130 audit(1707472705.977:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.178933 kernel: audit: type=1130 audit(1707472706.072:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.178947 kernel: audit: type=1131 audit(1707472706.072:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:25.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.767885 systemd[1]: mnt-oem1243069572.mount: Deactivated successfully. Feb 9 09:58:26.184581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem38796426" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem38796426" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem38796426" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(19): [finished] processing 
unit "nvidia.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:58:26.184581 ignition[1030]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:58:26.459432 kernel: audit: type=1130 audit(1707472706.188:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:26.459462 kernel: audit: type=1131 audit(1707472706.285:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.800674 systemd[1]: mnt-oem38796426.mount: Deactivated successfully. Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(24): [started] setting preset to enabled for "waagent.service" 
Feb 9 09:58:26.468388 ignition[1030]: INFO : files: op(24): [finished] setting preset to enabled for "waagent.service" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:58:26.468388 ignition[1030]: INFO : files: files passed Feb 9 09:58:26.468388 ignition[1030]: INFO : Ignition finished successfully Feb 9 09:58:26.743569 kernel: audit: type=1131 audit(1707472706.472:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.743602 kernel: audit: type=1131 audit(1707472706.521:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.743613 kernel: audit: type=1131 audit(1707472706.566:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.743623 kernel: audit: type=1131 audit(1707472706.604:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.743639 kernel: audit: type=1131 audit(1707472706.641:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:58:26.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.743795 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:58:26.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.810730 systemd[1]: Finished ignition-files.service. Feb 9 09:58:26.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.848819 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 09:58:26.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.863881 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:58:26.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.875349 systemd[1]: Starting ignition-quench.service... Feb 9 09:58:26.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.903528 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:58:26.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:25.903638 systemd[1]: Finished ignition-quench.service. Feb 9 09:58:25.961997 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:58:26.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:26.832067 ignition[1068]: INFO : Ignition 2.14.0 Feb 9 09:58:26.832067 ignition[1068]: INFO : Stage: umount Feb 9 09:58:26.832067 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:58:26.832067 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:58:26.832067 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:58:26.832067 ignition[1068]: INFO : umount: umount passed Feb 9 09:58:26.832067 ignition[1068]: INFO : Ignition finished successfully Feb 9 09:58:25.978301 systemd[1]: Reached target ignition-complete.target. Feb 9 09:58:26.035258 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:58:26.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.068095 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:58:26.068199 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:58:26.072826 systemd[1]: Reached target initrd-fs.target. Feb 9 09:58:26.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.110118 systemd[1]: Reached target initrd.target. Feb 9 09:58:26.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:26.118304 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:58:26.119141 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:58:26.179751 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:58:26.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.225627 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:58:26.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.247713 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:58:26.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.253914 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:58:26.265053 systemd[1]: Stopped target timers.target. Feb 9 09:58:27.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.275524 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:58:27.021000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:58:27.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.275590 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:58:26.309108 systemd[1]: Stopped target initrd.target. 
Feb 9 09:58:27.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.319640 systemd[1]: Stopped target basic.target. Feb 9 09:58:27.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.333632 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:58:27.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.349722 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:58:27.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.360975 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:58:27.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.372951 systemd[1]: Stopped target remote-fs.target. Feb 9 09:58:27.112394 kernel: hv_netvsc 000d3ac4-4940-000d-3ac4-4940000d3ac4 eth0: Data path switched from VF: enP3206s1 Feb 9 09:58:27.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.388718 systemd[1]: Stopped target remote-fs-pre.target. 
Feb 9 09:58:27.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:27.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.405713 systemd[1]: Stopped target sysinit.target. Feb 9 09:58:26.419069 systemd[1]: Stopped target local-fs.target. Feb 9 09:58:26.431754 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:58:26.448041 systemd[1]: Stopped target swap.target. Feb 9 09:58:26.463453 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:58:26.463515 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:58:26.497247 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:58:26.509375 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:58:26.509434 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:58:26.543305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:58:26.543368 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:58:26.566249 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:58:26.566299 systemd[1]: Stopped ignition-files.service. Feb 9 09:58:26.604524 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:58:26.604579 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:58:26.692555 systemd[1]: Stopping ignition-mount.service... Feb 9 09:58:26.703793 systemd[1]: Stopping iscsiuio.service... Feb 9 09:58:26.721896 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:58:26.733360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:58:26.733441 systemd[1]: Stopped systemd-udev-trigger.service. 
Feb 9 09:58:26.738354 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:58:26.738392 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:58:26.748763 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:58:26.748894 systemd[1]: Stopped iscsiuio.service. Feb 9 09:58:27.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:26.761593 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:58:26.761684 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:58:26.769840 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:58:26.769932 systemd[1]: Stopped ignition-mount.service. Feb 9 09:58:26.780951 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:58:26.781403 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:58:26.781452 systemd[1]: Stopped ignition-disks.service. Feb 9 09:58:26.794561 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:58:26.794611 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:58:27.317682 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 9 09:58:27.317724 iscsid[879]: iscsid shutting down. Feb 9 09:58:26.803673 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:58:26.803712 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:58:26.808652 systemd[1]: Stopped target network.target. Feb 9 09:58:26.817971 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:58:26.818032 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:58:26.826897 systemd[1]: Stopped target paths.target. Feb 9 09:58:26.836033 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:58:26.844201 systemd[1]: Stopped systemd-ask-password-console.path. 
Feb 9 09:58:26.855218 systemd[1]: Stopped target slices.target. Feb 9 09:58:26.871643 systemd[1]: Stopped target sockets.target. Feb 9 09:58:26.882044 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:58:26.882080 systemd[1]: Closed iscsid.socket. Feb 9 09:58:26.889688 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:58:26.889731 systemd[1]: Closed iscsiuio.socket. Feb 9 09:58:26.897864 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:58:26.897911 systemd[1]: Stopped ignition-setup.service. Feb 9 09:58:26.907458 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:58:26.915624 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:58:26.924955 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:58:26.925052 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:58:26.933215 systemd-networkd[870]: eth0: DHCPv6 lease lost Feb 9 09:58:27.318000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:58:26.934434 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:58:26.934484 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:58:26.943484 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:58:26.943603 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:58:26.948556 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:58:26.948599 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:58:26.957673 systemd[1]: Stopping network-cleanup.service... Feb 9 09:58:26.968761 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:58:26.968833 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:58:26.978103 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:58:26.978153 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:58:26.991298 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:58:26.991383 systemd[1]: Stopped systemd-modules-load.service. 
Feb 9 09:58:26.996701 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:58:27.006271 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:58:27.006766 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:58:27.006870 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:58:27.014543 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:58:27.014662 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:58:27.023313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:58:27.023357 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:58:27.033038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:58:27.033072 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:58:27.037743 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:58:27.037790 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:58:27.046793 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:58:27.046841 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:58:27.055772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:58:27.055812 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:58:27.065249 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:58:27.073840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:58:27.073899 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:58:27.083899 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:58:27.083955 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:58:27.088487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:58:27.088535 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:58:27.109313 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Feb 9 09:58:27.109811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:58:27.109911 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:58:27.236772 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:58:27.236883 systemd[1]: Stopped network-cleanup.service. Feb 9 09:58:27.246159 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:58:27.255606 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:58:27.273603 systemd[1]: Switching root. Feb 9 09:58:27.319444 systemd-journald[276]: Journal stopped Feb 9 09:58:40.288109 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:58:40.288129 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:58:40.288140 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:58:40.288150 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:58:40.288158 kernel: SELinux: policy capability open_perms=1 Feb 9 09:58:40.288166 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:58:40.288187 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:58:40.288196 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:58:40.288204 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:58:40.288212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:58:40.288221 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:58:40.288231 systemd[1]: Successfully loaded SELinux policy in 304.961ms. Feb 9 09:58:40.288242 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.559ms. 
Feb 9 09:58:40.288252 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:58:40.288263 systemd[1]: Detected virtualization microsoft. Feb 9 09:58:40.288272 systemd[1]: Detected architecture arm64. Feb 9 09:58:40.288281 systemd[1]: Detected first boot. Feb 9 09:58:40.288290 systemd[1]: Hostname set to . Feb 9 09:58:40.288301 systemd[1]: Initializing machine ID from random generator. Feb 9 09:58:40.288310 kernel: kauditd_printk_skb: 31 callbacks suppressed Feb 9 09:58:40.288319 kernel: audit: type=1400 audit(1707472711.517:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:58:40.288328 kernel: audit: type=1400 audit(1707472711.517:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:58:40.288338 kernel: audit: type=1334 audit(1707472711.523:84): prog-id=10 op=LOAD Feb 9 09:58:40.288347 kernel: audit: type=1334 audit(1707472711.523:85): prog-id=10 op=UNLOAD Feb 9 09:58:40.288356 kernel: audit: type=1334 audit(1707472711.540:86): prog-id=11 op=LOAD Feb 9 09:58:40.288364 kernel: audit: type=1334 audit(1707472711.540:87): prog-id=11 op=UNLOAD Feb 9 09:58:40.288373 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 09:58:40.288383 kernel: audit: type=1400 audit(1707472712.961:88): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:58:40.288394 kernel: audit: type=1300 audit(1707472712.961:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:40.288404 kernel: audit: type=1327 audit(1707472712.961:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:58:40.288413 kernel: audit: type=1400 audit(1707472712.987:89): avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:58:40.288422 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:58:40.288432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:58:40.288442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:58:40.288452 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:58:40.288488 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 09:58:40.288498 kernel: audit: type=1334 audit(1707472719.568:90): prog-id=12 op=LOAD Feb 9 09:58:40.288507 kernel: audit: type=1334 audit(1707472719.568:91): prog-id=3 op=UNLOAD Feb 9 09:58:40.288516 kernel: audit: type=1334 audit(1707472719.569:92): prog-id=13 op=LOAD Feb 9 09:58:40.288526 kernel: audit: type=1334 audit(1707472719.574:93): prog-id=14 op=LOAD Feb 9 09:58:40.288537 kernel: audit: type=1334 audit(1707472719.574:94): prog-id=4 op=UNLOAD Feb 9 09:58:40.288546 kernel: audit: type=1334 audit(1707472719.574:95): prog-id=5 op=UNLOAD Feb 9 09:58:40.288555 kernel: audit: type=1334 audit(1707472719.580:96): prog-id=15 op=LOAD Feb 9 09:58:40.288565 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:58:40.288575 kernel: audit: type=1334 audit(1707472719.580:97): prog-id=12 op=UNLOAD Feb 9 09:58:40.288584 systemd[1]: Stopped iscsid.service. Feb 9 09:58:40.288593 kernel: audit: type=1334 audit(1707472719.585:98): prog-id=16 op=LOAD Feb 9 09:58:40.288602 kernel: audit: type=1334 audit(1707472719.591:99): prog-id=17 op=LOAD Feb 9 09:58:40.288611 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:58:40.288620 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:58:40.288630 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:58:40.288643 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:58:40.288653 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:58:40.288662 systemd[1]: Created slice system-getty.slice. Feb 9 09:58:40.288672 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:58:40.288681 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 09:58:40.288691 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:58:40.288700 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:58:40.288709 systemd[1]: Created slice user.slice. Feb 9 09:58:40.288719 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:58:40.288730 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:58:40.288740 systemd[1]: Set up automount boot.automount. Feb 9 09:58:40.288749 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:58:40.288759 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:58:40.288768 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:58:40.288778 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:58:40.288787 systemd[1]: Reached target integritysetup.target. Feb 9 09:58:40.288798 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:58:40.288807 systemd[1]: Reached target remote-fs.target. Feb 9 09:58:40.288817 systemd[1]: Reached target slices.target. Feb 9 09:58:40.288826 systemd[1]: Reached target swap.target. Feb 9 09:58:40.288836 systemd[1]: Reached target torcx.target. Feb 9 09:58:40.288845 systemd[1]: Reached target veritysetup.target. Feb 9 09:58:40.288855 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:58:40.288866 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:58:40.288876 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:58:40.288885 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:58:40.288895 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:58:40.288904 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:58:40.288914 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:58:40.288923 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:58:40.288935 systemd[1]: Mounting media.mount... Feb 9 09:58:40.288944 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:58:40.288954 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 9 09:58:40.288963 systemd[1]: Mounting tmp.mount... Feb 9 09:58:40.288973 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:58:40.288982 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:58:40.288992 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:58:40.289002 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:58:40.289011 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:58:40.289022 systemd[1]: Starting modprobe@drm.service... Feb 9 09:58:40.289032 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:58:40.289042 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:58:40.289051 systemd[1]: Starting modprobe@loop.service... Feb 9 09:58:40.289061 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:58:40.289071 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:58:40.289081 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:58:40.289090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:58:40.289100 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:58:40.289110 kernel: fuse: init (API version 7.34) Feb 9 09:58:40.289120 systemd[1]: Stopped systemd-journald.service. Feb 9 09:58:40.289129 kernel: loop: module loaded Feb 9 09:58:40.289139 systemd[1]: systemd-journald.service: Consumed 3.284s CPU time. Feb 9 09:58:40.289149 systemd[1]: Starting systemd-journald.service... Feb 9 09:58:40.289158 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:58:40.289180 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:58:40.289195 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:58:40.289205 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:58:40.289217 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:58:40.289226 systemd[1]: Stopped verity-setup.service. 
Feb 9 09:58:40.289236 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:58:40.289245 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:58:40.289255 systemd[1]: Mounted media.mount. Feb 9 09:58:40.289265 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:58:40.289278 systemd-journald[1207]: Journal started Feb 9 09:58:40.289318 systemd-journald[1207]: Runtime Journal (/run/log/journal/39066337b1d64f25850fa081a04b8269) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:58:30.569000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:58:31.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:58:31.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:58:31.523000 audit: BPF prog-id=10 op=LOAD Feb 9 09:58:31.523000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:58:31.540000 audit: BPF prog-id=11 op=LOAD Feb 9 09:58:31.540000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:58:32.961000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:58:32.961000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:32.961000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:58:32.987000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:58:32.987000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:32.987000 audit: CWD cwd="/" Feb 9 09:58:32.987000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:32.987000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:32.987000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:58:39.568000 audit: BPF prog-id=12 op=LOAD Feb 9 09:58:39.568000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:58:39.569000 audit: BPF prog-id=13 op=LOAD Feb 9 09:58:39.574000 audit: BPF prog-id=14 op=LOAD Feb 9 09:58:39.574000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:58:39.574000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:58:39.580000 audit: BPF 
prog-id=15 op=LOAD Feb 9 09:58:39.580000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:58:39.585000 audit: BPF prog-id=16 op=LOAD Feb 9 09:58:39.591000 audit: BPF prog-id=17 op=LOAD Feb 9 09:58:39.591000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:58:39.591000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:58:39.596000 audit: BPF prog-id=18 op=LOAD Feb 9 09:58:39.596000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:58:39.602000 audit: BPF prog-id=19 op=LOAD Feb 9 09:58:39.607000 audit: BPF prog-id=20 op=LOAD Feb 9 09:58:39.607000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:58:39.607000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:58:39.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:39.628000 audit: BPF prog-id=18 op=UNLOAD Feb 9 09:58:39.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:39.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:39.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:40.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.184000 audit: BPF prog-id=21 op=LOAD Feb 9 09:58:40.184000 audit: BPF prog-id=22 op=LOAD Feb 9 09:58:40.184000 audit: BPF prog-id=23 op=LOAD Feb 9 09:58:40.184000 audit: BPF prog-id=19 op=UNLOAD Feb 9 09:58:40.184000 audit: BPF prog-id=20 op=UNLOAD Feb 9 09:58:40.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:40.285000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:58:40.285000 audit[1207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd66b7f20 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:40.285000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:58:32.908166 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:58:39.567808 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:58:32.944434 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:58:39.608403 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:58:32.944454 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:58:39.608791 systemd[1]: systemd-journald.service: Consumed 3.284s CPU time. 
Feb 9 09:58:32.944493 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:58:32.944504 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:58:32.944540 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:58:32.944552 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:58:32.944750 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:58:32.944782 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:58:32.944794 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:58:32.945344 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:58:32.945380 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:58:32.945398 /usr/lib/systemd/system-generators/torcx-generator[1101]: 
time="2024-02-09T09:58:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:58:32.945422 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:58:32.945441 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:58:32.945455 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:58:38.592078 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:58:38.592362 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:58:38.592453 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:58:38.592609 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:58:38.592657 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:58:38.592712 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:58:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:58:40.299302 systemd[1]: Started systemd-journald.service. Feb 9 09:58:40.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.300112 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:58:40.304948 systemd[1]: Mounted tmp.mount. Feb 9 09:58:40.309189 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:58:40.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.314402 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:58:40.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.319429 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 09:58:40.319563 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:58:40.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.324686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:58:40.324813 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:58:40.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.329857 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:58:40.329984 systemd[1]: Finished modprobe@drm.service. Feb 9 09:58:40.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.334801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:58:40.334925 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 09:58:40.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.340059 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:58:40.340196 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:58:40.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.344767 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:58:40.344885 systemd[1]: Finished modprobe@loop.service. Feb 9 09:58:40.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.349786 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 09:58:40.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.355563 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:58:40.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.360884 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:58:40.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.366111 systemd[1]: Reached target network-pre.target. Feb 9 09:58:40.372298 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:58:40.377790 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:58:40.381957 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:58:40.399905 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:58:40.405440 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:58:40.409978 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:58:40.411059 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:58:40.415399 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:58:40.416422 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:58:40.422290 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:58:40.429479 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:58:40.434839 systemd[1]: Mounted sys-kernel-config.mount. 
Feb 9 09:58:40.441249 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:58:40.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.447881 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:58:40.454699 systemd-journald[1207]: Time spent on flushing to /var/log/journal/39066337b1d64f25850fa081a04b8269 is 14.151ms for 1148 entries. Feb 9 09:58:40.454699 systemd-journald[1207]: System Journal (/var/log/journal/39066337b1d64f25850fa081a04b8269) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:58:40.528920 systemd-journald[1207]: Received client request to flush runtime journal. Feb 9 09:58:40.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.529709 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:58:40.483533 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:58:40.488508 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:58:40.507656 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:58:40.529860 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:58:40.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:40.934995 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:58:40.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:40.940998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:58:41.325735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:58:41.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:41.576431 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:58:41.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:41.581000 audit: BPF prog-id=24 op=LOAD Feb 9 09:58:41.581000 audit: BPF prog-id=25 op=LOAD Feb 9 09:58:41.582000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:58:41.582000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:58:41.582846 systemd[1]: Starting systemd-udevd.service... Feb 9 09:58:41.601285 systemd-udevd[1226]: Using default interface naming scheme 'v252'. Feb 9 09:58:41.843132 systemd[1]: Started systemd-udevd.service. Feb 9 09:58:41.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:41.853000 audit: BPF prog-id=26 op=LOAD Feb 9 09:58:41.854674 systemd[1]: Starting systemd-networkd.service... Feb 9 09:58:41.880916 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
Feb 9 09:58:41.926091 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:58:41.925000 audit: BPF prog-id=27 op=LOAD Feb 9 09:58:41.925000 audit: BPF prog-id=28 op=LOAD Feb 9 09:58:41.925000 audit: BPF prog-id=29 op=LOAD Feb 9 09:58:41.956000 audit[1235]: AVC avc: denied { confidentiality } for pid=1235 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:58:41.974326 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:58:41.974358 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:58:41.974371 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:58:41.982492 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:58:41.989394 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:58:41.989464 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:58:41.997543 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:58:42.006370 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:58:42.006443 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:58:42.006460 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:58:42.014222 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:58:42.014278 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:58:41.854337 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:58:42.430168 systemd-journald[1207]: Time jumped backwards, rotating. Feb 9 09:58:42.430253 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:58:41.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:41.956000 audit[1235]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab0c7d1570 a1=aa2c a2=ffff903724b0 a3=aaab0c72f010 items=12 ppid=1226 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:41.956000 audit: CWD cwd="/" Feb 9 09:58:41.956000 audit: PATH item=0 name=(null) inode=5918 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=1 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=2 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=3 name=(null) inode=10220 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=4 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=5 name=(null) inode=10221 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=6 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=7 name=(null) inode=10222 dev=00:0a 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=8 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=9 name=(null) inode=10223 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=10 name=(null) inode=10219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PATH item=11 name=(null) inode=10224 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:58:41.956000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:58:41.868329 systemd[1]: Started systemd-userdbd.service. Feb 9 09:58:42.518513 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1238) Feb 9 09:58:42.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:42.518859 systemd-networkd[1247]: lo: Link UP Feb 9 09:58:42.518866 systemd-networkd[1247]: lo: Gained carrier Feb 9 09:58:42.519266 systemd-networkd[1247]: Enumeration completed Feb 9 09:58:42.519372 systemd[1]: Started systemd-networkd.service. Feb 9 09:58:42.528140 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:58:42.547617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 09:58:42.547796 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:58:42.554744 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:58:42.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:42.560947 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:58:42.609313 kernel: mlx5_core 0c86:00:02.0 enP3206s1: Link up Feb 9 09:58:42.638316 kernel: hv_netvsc 000d3ac4-4940-000d-3ac4-4940000d3ac4 eth0: Data path switched to VF: enP3206s1 Feb 9 09:58:42.638710 systemd-networkd[1247]: enP3206s1: Link UP Feb 9 09:58:42.638884 systemd-networkd[1247]: eth0: Link UP Feb 9 09:58:42.638967 systemd-networkd[1247]: eth0: Gained carrier Feb 9 09:58:42.647590 systemd-networkd[1247]: enP3206s1: Gained carrier Feb 9 09:58:42.661439 systemd-networkd[1247]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:58:42.854657 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:58:42.894300 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:58:42.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:42.899345 systemd[1]: Reached target cryptsetup.target. Feb 9 09:58:42.905371 systemd[1]: Starting lvm2-activation.service... Feb 9 09:58:42.909433 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:58:42.932228 systemd[1]: Finished lvm2-activation.service. 
Feb 9 09:58:42.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:42.937057 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:58:42.941708 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:58:42.941737 systemd[1]: Reached target local-fs.target. Feb 9 09:58:42.946073 systemd[1]: Reached target machines.target. Feb 9 09:58:42.951797 systemd[1]: Starting ldconfig.service... Feb 9 09:58:42.955647 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:58:42.955712 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:58:42.956847 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:58:42.962098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:58:42.969301 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:58:42.974041 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:58:42.974096 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:58:42.975139 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:58:43.010575 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:58:43.011964 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1309 (bootctl) Feb 9 09:58:43.013246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 09:58:43.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:43.189373 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:58:43.218411 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:58:43.219642 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:58:43.809015 systemd-fsck[1317]: fsck.fat 4.2 (2021-01-31) Feb 9 09:58:43.809015 systemd-fsck[1317]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:58:43.810050 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:58:43.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:43.819462 systemd[1]: Mounting boot.mount... Feb 9 09:58:43.917551 systemd[1]: Mounted boot.mount. Feb 9 09:58:43.930067 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:58:43.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:44.039442 systemd-networkd[1247]: eth0: Gained IPv6LL Feb 9 09:58:44.047235 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:58:44.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:44.284966 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:58:44.285554 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:58:44.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.612349 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:58:45.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.622926 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 09:58:45.622989 kernel: audit: type=1130 audit(1707472725.617:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.624266 systemd[1]: Starting audit-rules.service... Feb 9 09:58:45.647817 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:58:45.654163 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:58:45.660000 audit: BPF prog-id=30 op=LOAD Feb 9 09:58:45.661542 systemd[1]: Starting systemd-resolved.service... Feb 9 09:58:45.671132 kernel: audit: type=1334 audit(1707472725.660:170): prog-id=30 op=LOAD Feb 9 09:58:45.672000 audit: BPF prog-id=31 op=LOAD Feb 9 09:58:45.675145 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:58:45.681468 kernel: audit: type=1334 audit(1707472725.672:171): prog-id=31 op=LOAD Feb 9 09:58:45.683040 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:58:45.719392 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 09:58:45.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.730500 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:58:45.746556 kernel: audit: type=1130 audit(1707472725.724:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.748000 audit[1334]: SYSTEM_BOOT pid=1334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.769333 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:58:45.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.794533 kernel: audit: type=1127 audit(1707472725.748:173): pid=1334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.794638 kernel: audit: type=1130 audit(1707472725.775:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.815846 systemd[1]: Started systemd-timesyncd.service. 
Feb 9 09:58:45.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.821386 systemd[1]: Reached target time-set.target. Feb 9 09:58:45.844508 kernel: audit: type=1130 audit(1707472725.820:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.863426 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:58:45.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.887319 kernel: audit: type=1130 audit(1707472725.869:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.887601 systemd-resolved[1327]: Positive Trust Anchors: Feb 9 09:58:45.887614 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:58:45.887643 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:58:45.951097 systemd-resolved[1327]: Using system hostname 'ci-3510.3.2-a-ff24132019'. 
Feb 9 09:58:45.952670 systemd[1]: Started systemd-resolved.service. Feb 9 09:58:45.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.957483 systemd[1]: Reached target network.target. Feb 9 09:58:45.980402 kernel: audit: type=1130 audit(1707472725.957:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:45.980794 systemd[1]: Reached target network-online.target. Feb 9 09:58:45.986219 systemd[1]: Reached target nss-lookup.target. Feb 9 09:58:46.129000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:58:46.142306 kernel: audit: type=1305 audit(1707472726.129:178): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:58:46.129000 audit[1344]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf947ed0 a2=420 a3=0 items=0 ppid=1323 pid=1344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:46.129000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:58:46.150102 systemd-timesyncd[1333]: Contacted time server 204.93.207.12:123 (0.flatcar.pool.ntp.org). Feb 9 09:58:46.150173 systemd-timesyncd[1333]: Initial clock synchronization to Fri 2024-02-09 09:58:46.132897 UTC. Feb 9 09:58:46.193981 augenrules[1344]: No rules Feb 9 09:58:46.195070 systemd[1]: Finished audit-rules.service. 
Feb 9 09:58:52.805698 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:58:52.843615 systemd[1]: Finished ldconfig.service. Feb 9 09:58:52.849637 systemd[1]: Starting systemd-update-done.service... Feb 9 09:58:52.886126 systemd[1]: Finished systemd-update-done.service. Feb 9 09:58:52.890923 systemd[1]: Reached target sysinit.target. Feb 9 09:58:52.895227 systemd[1]: Started motdgen.path. Feb 9 09:58:52.898927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:58:52.905030 systemd[1]: Started logrotate.timer. Feb 9 09:58:52.909031 systemd[1]: Started mdadm.timer. Feb 9 09:58:52.912549 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:58:52.917129 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:58:52.917161 systemd[1]: Reached target paths.target. Feb 9 09:58:52.921641 systemd[1]: Reached target timers.target. Feb 9 09:58:52.926414 systemd[1]: Listening on dbus.socket. Feb 9 09:58:52.931406 systemd[1]: Starting docker.socket... Feb 9 09:58:52.942938 systemd[1]: Listening on sshd.socket. Feb 9 09:58:52.947099 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:58:52.947606 systemd[1]: Listening on docker.socket. Feb 9 09:58:52.951754 systemd[1]: Reached target sockets.target. Feb 9 09:58:52.955993 systemd[1]: Reached target basic.target. Feb 9 09:58:52.960181 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:58:52.960209 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:58:52.961331 systemd[1]: Starting containerd.service... Feb 9 09:58:52.965929 systemd[1]: Starting dbus.service... 
Feb 9 09:58:52.970114 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:58:52.975343 systemd[1]: Starting extend-filesystems.service... Feb 9 09:58:52.981988 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:58:52.983014 systemd[1]: Starting motdgen.service... Feb 9 09:58:52.988868 systemd[1]: Started nvidia.service. Feb 9 09:58:52.993834 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:58:52.998984 systemd[1]: Starting prepare-critools.service... Feb 9 09:58:53.003914 systemd[1]: Starting prepare-helm.service... Feb 9 09:58:53.008549 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:58:53.013696 systemd[1]: Starting sshd-keygen.service... Feb 9 09:58:53.019940 systemd[1]: Starting systemd-logind.service... Feb 9 09:58:53.023962 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:58:53.024026 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:58:53.024421 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:58:53.025058 systemd[1]: Starting update-engine.service... Feb 9 09:58:53.029873 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:58:53.042527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:58:53.042711 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:58:53.081398 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:58:53.081574 systemd[1]: Finished motdgen.service. 
Feb 9 09:58:53.094756 extend-filesystems[1355]: Found sda Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda1 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda2 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda3 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found usr Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda4 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda6 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda7 Feb 9 09:58:53.098926 extend-filesystems[1355]: Found sda9 Feb 9 09:58:53.098926 extend-filesystems[1355]: Checking size of /dev/sda9 Feb 9 09:58:53.120827 systemd-logind[1368]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 9 09:58:53.124196 systemd-logind[1368]: New seat seat0. Feb 9 09:58:53.147301 env[1379]: time="2024-02-09T09:58:53.147238384Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:58:53.176309 env[1379]: time="2024-02-09T09:58:53.174793852Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:58:53.176309 env[1379]: time="2024-02-09T09:58:53.174957694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.176460 env[1379]: time="2024-02-09T09:58:53.176376745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:58:53.176460 env[1379]: time="2024-02-09T09:58:53.176409281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176617890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176647309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176663657Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176673370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176751074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.176963959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.177073800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.177089668Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.177136714Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:58:53.177345 env[1379]: time="2024-02-09T09:58:53.177149065Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:58:53.190141 jq[1354]: false Feb 9 09:58:53.190928 jq[1373]: true Feb 9 09:58:53.205798 env[1379]: time="2024-02-09T09:58:53.205758170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:58:53.205962 env[1379]: time="2024-02-09T09:58:53.205946713Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:58:53.206045 env[1379]: time="2024-02-09T09:58:53.206029893Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:58:53.206145 env[1379]: time="2024-02-09T09:58:53.206130380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.206265 env[1379]: time="2024-02-09T09:58:53.206250973Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.206367 env[1379]: time="2024-02-09T09:58:53.206352139Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.206448 env[1379]: time="2024-02-09T09:58:53.206434000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.206856 env[1379]: time="2024-02-09T09:58:53.206832511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.206948 env[1379]: time="2024-02-09T09:58:53.206933238Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 9 09:58:53.207028 env[1379]: time="2024-02-09T09:58:53.207013460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.207092 env[1379]: time="2024-02-09T09:58:53.207079612Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.207166 env[1379]: time="2024-02-09T09:58:53.207152080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:58:53.207421 env[1379]: time="2024-02-09T09:58:53.207374278Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:58:53.207614 env[1379]: time="2024-02-09T09:58:53.207597397Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:58:53.207981 env[1379]: time="2024-02-09T09:58:53.207953299Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:58:53.208074 env[1379]: time="2024-02-09T09:58:53.208058303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208144 env[1379]: time="2024-02-09T09:58:53.208131290Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:58:53.208254 env[1379]: time="2024-02-09T09:58:53.208239491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208395 env[1379]: time="2024-02-09T09:58:53.208378031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208485 env[1379]: time="2024-02-09T09:58:53.208470764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 9 09:58:53.208562 env[1379]: time="2024-02-09T09:58:53.208549147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208636 env[1379]: time="2024-02-09T09:58:53.208623093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208709 env[1379]: time="2024-02-09T09:58:53.208683570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208784 env[1379]: time="2024-02-09T09:58:53.208759714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208842 env[1379]: time="2024-02-09T09:58:53.208828664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.208923 env[1379]: time="2024-02-09T09:58:53.208908926Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:58:53.209124 env[1379]: time="2024-02-09T09:58:53.209105903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.209213 env[1379]: time="2024-02-09T09:58:53.209198117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.209283 env[1379]: time="2024-02-09T09:58:53.209270984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.209382 env[1379]: time="2024-02-09T09:58:53.209368393Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:58:53.209462 env[1379]: time="2024-02-09T09:58:53.209446297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:58:53.209535 env[1379]: time="2024-02-09T09:58:53.209521003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:58:53.209609 env[1379]: time="2024-02-09T09:58:53.209583797Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:58:53.209699 env[1379]: time="2024-02-09T09:58:53.209685124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:58:53.210038 env[1379]: time="2024-02-09T09:58:53.209977831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.210711100Z" level=info msg="Connect containerd service" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.210772655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.212641781Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.212881927Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.212918940Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.215196369Z" level=info msg="Start subscribing containerd event" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.216008421Z" level=info msg="Start recovering state" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.218930743Z" level=info msg="Start event monitor" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.218965158Z" level=info msg="Start snapshots syncer" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.218981066Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.219012004Z" level=info msg="Start streaming server" Feb 9 09:58:53.227167 env[1379]: time="2024-02-09T09:58:53.219114889Z" level=info msg="containerd successfully booted in 0.072922s" Feb 9 09:58:53.210938 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:58:53.211136 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:58:53.229372 tar[1375]: ./ Feb 9 09:58:53.229372 tar[1375]: ./loopback Feb 9 09:58:53.213038 systemd[1]: Started containerd.service. Feb 9 09:58:53.231069 tar[1376]: crictl Feb 9 09:58:53.233975 tar[1377]: linux-arm64/helm Feb 9 09:58:53.245249 jq[1416]: true Feb 9 09:58:53.272423 extend-filesystems[1355]: Old size kept for /dev/sda9 Feb 9 09:58:53.281167 extend-filesystems[1355]: Found sr0 Feb 9 09:58:53.280661 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:58:53.288612 systemd[1]: Finished extend-filesystems.service. Feb 9 09:58:53.335475 tar[1375]: ./bandwidth Feb 9 09:58:53.339671 dbus-daemon[1353]: [system] SELinux support is enabled Feb 9 09:58:53.339848 systemd[1]: Started dbus.service. 
Feb 9 09:58:53.347570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:58:53.348081 dbus-daemon[1353]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:58:53.347601 systemd[1]: Reached target system-config.target. Feb 9 09:58:53.354492 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:58:53.354519 systemd[1]: Reached target user-config.target. Feb 9 09:58:53.361665 systemd[1]: Started systemd-logind.service. Feb 9 09:58:53.407893 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:58:53.409095 bash[1434]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:58:53.409840 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:58:53.439871 tar[1375]: ./ptp Feb 9 09:58:53.548428 tar[1375]: ./vlan Feb 9 09:58:53.617062 tar[1375]: ./host-device Feb 9 09:58:53.683507 tar[1375]: ./tuning Feb 9 09:58:53.742420 tar[1375]: ./vrf Feb 9 09:58:53.801501 tar[1375]: ./sbr Feb 9 09:58:53.862745 tar[1375]: ./tap Feb 9 09:58:53.926357 update_engine[1372]: I0209 09:58:53.911568 1372 main.cc:92] Flatcar Update Engine starting Feb 9 09:58:53.934373 tar[1375]: ./dhcp Feb 9 09:58:54.007873 systemd[1]: Started update-engine.service. Feb 9 09:58:54.008322 update_engine[1372]: I0209 09:58:54.007928 1372 update_check_scheduler.cc:74] Next update check in 10m48s Feb 9 09:58:54.016499 systemd[1]: Started locksmithd.service. Feb 9 09:58:54.095833 systemd[1]: Finished prepare-critools.service. Feb 9 09:58:54.107829 tar[1375]: ./static Feb 9 09:58:54.150055 tar[1375]: ./firewall Feb 9 09:58:54.151356 tar[1377]: linux-arm64/LICENSE Feb 9 09:58:54.151421 tar[1377]: linux-arm64/README.md Feb 9 09:58:54.163646 systemd[1]: Finished prepare-helm.service. 
Feb 9 09:58:54.190966 tar[1375]: ./macvlan Feb 9 09:58:54.225165 tar[1375]: ./dummy Feb 9 09:58:54.258663 tar[1375]: ./bridge Feb 9 09:58:54.295773 tar[1375]: ./ipvlan Feb 9 09:58:54.328760 tar[1375]: ./portmap Feb 9 09:58:54.360321 tar[1375]: ./host-local Feb 9 09:58:54.447367 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:58:54.488721 sshd_keygen[1371]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:58:54.506276 systemd[1]: Finished sshd-keygen.service. Feb 9 09:58:54.512815 systemd[1]: Starting issuegen.service... Feb 9 09:58:54.518125 systemd[1]: Started waagent.service. Feb 9 09:58:54.523262 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:58:54.523454 systemd[1]: Finished issuegen.service. Feb 9 09:58:54.529662 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:58:54.557852 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:58:54.564600 systemd[1]: Started getty@tty1.service. Feb 9 09:58:54.571236 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:58:54.576600 systemd[1]: Reached target getty.target. Feb 9 09:58:54.581361 systemd[1]: Reached target multi-user.target. Feb 9 09:58:54.587969 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:58:54.596576 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:58:54.596755 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:58:54.602949 systemd[1]: Startup finished in 738ms (kernel) + 18.453s (initrd) + 24.682s (userspace) = 43.874s. Feb 9 09:58:55.695590 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:58:56.019389 login[1482]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:58:56.021147 login[1483]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:58:56.080485 systemd[1]: Created slice user-500.slice. 
Feb 9 09:58:56.081593 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:58:56.084217 systemd-logind[1368]: New session 2 of user core. Feb 9 09:58:56.120847 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:58:56.122329 systemd[1]: Starting user@500.service... Feb 9 09:58:56.173842 (systemd)[1491]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:56.431113 systemd[1491]: Queued start job for default target default.target. Feb 9 09:58:56.431666 systemd[1491]: Reached target paths.target. Feb 9 09:58:56.431687 systemd[1491]: Reached target sockets.target. Feb 9 09:58:56.431698 systemd[1491]: Reached target timers.target. Feb 9 09:58:56.431709 systemd[1491]: Reached target basic.target. Feb 9 09:58:56.431754 systemd[1491]: Reached target default.target. Feb 9 09:58:56.431777 systemd[1491]: Startup finished in 252ms. Feb 9 09:58:56.431822 systemd[1]: Started user@500.service. Feb 9 09:58:56.432726 systemd[1]: Started session-2.scope. Feb 9 09:58:57.019773 login[1482]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:58:57.023822 systemd-logind[1368]: New session 1 of user core. Feb 9 09:58:57.024248 systemd[1]: Started session-1.scope. 
Feb 9 09:59:01.026088 waagent[1480]: 2024-02-09T09:59:01.025974Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:59:01.032673 waagent[1480]: 2024-02-09T09:59:01.032598Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:59:01.037376 waagent[1480]: 2024-02-09T09:59:01.037317Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:59:01.041893 waagent[1480]: 2024-02-09T09:59:01.041795Z INFO Daemon Daemon Run daemon Feb 9 09:59:01.046443 waagent[1480]: 2024-02-09T09:59:01.046376Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:59:01.063778 waagent[1480]: 2024-02-09T09:59:01.063648Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:59:01.081654 waagent[1480]: 2024-02-09T09:59:01.081513Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:59:01.091874 waagent[1480]: 2024-02-09T09:59:01.091802Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:59:01.097524 waagent[1480]: 2024-02-09T09:59:01.097457Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:59:01.103549 waagent[1480]: 2024-02-09T09:59:01.103484Z INFO Daemon Daemon Activate resource disk Feb 9 09:59:01.108608 waagent[1480]: 2024-02-09T09:59:01.108548Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:59:01.123280 waagent[1480]: 2024-02-09T09:59:01.123210Z INFO Daemon Daemon Found device: None Feb 9 09:59:01.127984 waagent[1480]: 2024-02-09T09:59:01.127922Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:59:01.136582 waagent[1480]: 2024-02-09T09:59:01.136521Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 
09:59:01.148837 waagent[1480]: 2024-02-09T09:59:01.148773Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:59:01.154713 waagent[1480]: 2024-02-09T09:59:01.154656Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:59:01.167895 waagent[1480]: 2024-02-09T09:59:01.167773Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:59:01.182508 waagent[1480]: 2024-02-09T09:59:01.182388Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:59:01.192014 waagent[1480]: 2024-02-09T09:59:01.191952Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:59:01.197045 waagent[1480]: 2024-02-09T09:59:01.196986Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:59:01.307993 waagent[1480]: 2024-02-09T09:59:01.307798Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:59:01.432266 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:59:01.470387 waagent[1480]: 2024-02-09T09:59:01.470214Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:59:01.475542 waagent[1480]: 2024-02-09T09:59:01.475461Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:59:01.481431 waagent[1480]: 2024-02-09T09:59:01.481361Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 09:59:01.487912 waagent[1480]: 2024-02-09T09:59:01.487848Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:59:01.493407 waagent[1480]: 2024-02-09T09:59:01.493347Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:59:01.498418 waagent[1480]: 2024-02-09T09:59:01.498356Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:59:01.614988 waagent[1480]: 2024-02-09T09:59:01.614920Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:59:01.622213 waagent[1480]: 2024-02-09T09:59:01.622166Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:59:01.627578 waagent[1480]: 2024-02-09T09:59:01.627515Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:59:02.211609 waagent[1480]: 2024-02-09T09:59:02.211457Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:59:02.227518 waagent[1480]: 2024-02-09T09:59:02.227443Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 09:59:02.233549 waagent[1480]: 2024-02-09T09:59:02.233483Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:59:02.309884 waagent[1480]: 2024-02-09T09:59:02.309744Z INFO Daemon Daemon Found private key matching thumbprint 4E5171DB65194287C473276C915A4B8BB4726258 Feb 9 09:59:02.318243 waagent[1480]: 2024-02-09T09:59:02.318172Z INFO Daemon Daemon Certificate with thumbprint 9363FC4A89ACEE9AAE7CEAB98992FB879C1B86FD has no matching private key. 
Feb 9 09:59:02.328956 waagent[1480]: 2024-02-09T09:59:02.328875Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:59:02.358073 waagent[1480]: 2024-02-09T09:59:02.358015Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 9b80518c-ceda-46c3-bd37-33b9122a4c6d New eTag: 10011988503046185978] Feb 9 09:59:02.368819 waagent[1480]: 2024-02-09T09:59:02.368755Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:59:02.384418 waagent[1480]: 2024-02-09T09:59:02.384336Z INFO Daemon Daemon Starting provisioning Feb 9 09:59:02.389577 waagent[1480]: 2024-02-09T09:59:02.389515Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:59:02.394204 waagent[1480]: 2024-02-09T09:59:02.394148Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-ff24132019] Feb 9 09:59:02.493012 waagent[1480]: 2024-02-09T09:59:02.492872Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-ff24132019] Feb 9 09:59:02.499696 waagent[1480]: 2024-02-09T09:59:02.499614Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:59:02.506407 waagent[1480]: 2024-02-09T09:59:02.506344Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:59:02.522425 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:59:02.522597 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:59:02.522655 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:59:02.522891 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:59:02.528325 systemd-networkd[1247]: eth0: DHCPv6 lease lost Feb 9 09:59:02.529840 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:59:02.530017 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:59:02.531955 systemd[1]: Starting systemd-networkd.service... 
Feb 9 09:59:02.558198 systemd-networkd[1536]: enP3206s1: Link UP Feb 9 09:59:02.558574 systemd-networkd[1536]: enP3206s1: Gained carrier Feb 9 09:59:02.559652 systemd-networkd[1536]: eth0: Link UP Feb 9 09:59:02.559732 systemd-networkd[1536]: eth0: Gained carrier Feb 9 09:59:02.560107 systemd-networkd[1536]: lo: Link UP Feb 9 09:59:02.560169 systemd-networkd[1536]: lo: Gained carrier Feb 9 09:59:02.560566 systemd-networkd[1536]: eth0: Gained IPv6LL Feb 9 09:59:02.561061 systemd-networkd[1536]: Enumeration completed Feb 9 09:59:02.561734 systemd[1]: Started systemd-networkd.service. Feb 9 09:59:02.562145 systemd-networkd[1536]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:59:02.563449 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:59:02.566794 waagent[1480]: 2024-02-09T09:59:02.566635Z INFO Daemon Daemon Create user account if not exists Feb 9 09:59:02.573070 waagent[1480]: 2024-02-09T09:59:02.572989Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:59:02.579187 waagent[1480]: 2024-02-09T09:59:02.579107Z INFO Daemon Daemon Configure sudoer Feb 9 09:59:02.584733 waagent[1480]: 2024-02-09T09:59:02.584659Z INFO Daemon Daemon Configure sshd Feb 9 09:59:02.588928 waagent[1480]: 2024-02-09T09:59:02.588862Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:59:02.589380 systemd-networkd[1536]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:59:02.599234 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:59:03.799035 waagent[1480]: 2024-02-09T09:59:03.798936Z INFO Daemon Daemon Provisioning complete Feb 9 09:59:03.821433 waagent[1480]: 2024-02-09T09:59:03.821363Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:59:03.828298 waagent[1480]: 2024-02-09T09:59:03.828220Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 9 09:59:03.840723 waagent[1480]: 2024-02-09T09:59:03.840639Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:59:04.142286 waagent[1545]: 2024-02-09T09:59:04.142188Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:59:04.143053 waagent[1545]: 2024-02-09T09:59:04.142987Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:04.143183 waagent[1545]: 2024-02-09T09:59:04.143138Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:04.155460 waagent[1545]: 2024-02-09T09:59:04.155374Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 09:59:04.155654 waagent[1545]: 2024-02-09T09:59:04.155603Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:59:04.225418 waagent[1545]: 2024-02-09T09:59:04.225260Z INFO ExtHandler ExtHandler Found private key matching thumbprint 4E5171DB65194287C473276C915A4B8BB4726258 Feb 9 09:59:04.225624 waagent[1545]: 2024-02-09T09:59:04.225571Z INFO ExtHandler ExtHandler Certificate with thumbprint 9363FC4A89ACEE9AAE7CEAB98992FB879C1B86FD has no matching private key. 
Feb 9 09:59:04.225849 waagent[1545]: 2024-02-09T09:59:04.225801Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:59:04.241384 waagent[1545]: 2024-02-09T09:59:04.241323Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: afb63387-c433-400c-876f-192b850ca340 New eTag: 10011988503046185978] Feb 9 09:59:04.241988 waagent[1545]: 2024-02-09T09:59:04.241928Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:59:04.353544 waagent[1545]: 2024-02-09T09:59:04.353404Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:59:04.386714 waagent[1545]: 2024-02-09T09:59:04.386630Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1545 Feb 9 09:59:04.390450 waagent[1545]: 2024-02-09T09:59:04.390382Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:59:04.391796 waagent[1545]: 2024-02-09T09:59:04.391739Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:59:04.508448 waagent[1545]: 2024-02-09T09:59:04.508336Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:59:04.508780 waagent[1545]: 2024-02-09T09:59:04.508723Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:59:04.516505 waagent[1545]: 2024-02-09T09:59:04.516441Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 09:59:04.517042 waagent[1545]: 2024-02-09T09:59:04.516982Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:59:04.518200 waagent[1545]: 2024-02-09T09:59:04.518132Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:59:04.519641 waagent[1545]: 2024-02-09T09:59:04.519562Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:59:04.520284 waagent[1545]: 2024-02-09T09:59:04.520223Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:04.520585 waagent[1545]: 2024-02-09T09:59:04.520531Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:04.521301 waagent[1545]: 2024-02-09T09:59:04.521230Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:59:04.521700 waagent[1545]: 2024-02-09T09:59:04.521645Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:59:04.521700 waagent[1545]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:59:04.521700 waagent[1545]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:59:04.521700 waagent[1545]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:59:04.521700 waagent[1545]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:04.521700 waagent[1545]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:04.521700 waagent[1545]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:04.524129 waagent[1545]: 2024-02-09T09:59:04.523962Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 09:59:04.524918 waagent[1545]: 2024-02-09T09:59:04.524857Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:04.525185 waagent[1545]: 2024-02-09T09:59:04.525134Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:04.525504 waagent[1545]: 2024-02-09T09:59:04.525431Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:59:04.525675 waagent[1545]: 2024-02-09T09:59:04.525619Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:59:04.526593 waagent[1545]: 2024-02-09T09:59:04.526508Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:59:04.526961 waagent[1545]: 2024-02-09T09:59:04.526906Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:59:04.527167 waagent[1545]: 2024-02-09T09:59:04.527121Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:59:04.527556 waagent[1545]: 2024-02-09T09:59:04.527487Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:59:04.527655 waagent[1545]: 2024-02-09T09:59:04.527602Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:59:04.529926 waagent[1545]: 2024-02-09T09:59:04.529871Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:59:04.540842 waagent[1545]: 2024-02-09T09:59:04.540775Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:59:04.541555 waagent[1545]: 2024-02-09T09:59:04.541492Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:59:04.542562 waagent[1545]: 2024-02-09T09:59:04.542501Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:59:04.590930 waagent[1545]: 2024-02-09T09:59:04.590857Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1536' Feb 9 09:59:04.599760 waagent[1545]: 2024-02-09T09:59:04.599649Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 9 09:59:04.771102 waagent[1545]: 2024-02-09T09:59:04.770990Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:59:04.844359 waagent[1480]: 2024-02-09T09:59:04.844227Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:59:04.848026 waagent[1480]: 2024-02-09T09:59:04.847967Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:59:05.984181 waagent[1572]: 2024-02-09T09:59:05.984084Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:59:05.985207 waagent[1572]: 2024-02-09T09:59:05.985151Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:59:05.985453 waagent[1572]: 2024-02-09T09:59:05.985403Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:59:05.993322 waagent[1572]: 2024-02-09T09:59:05.993178Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:59:05.993852 waagent[1572]: 2024-02-09T09:59:05.993798Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:05.994092 waagent[1572]: 2024-02-09T09:59:05.994043Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:06.006824 waagent[1572]: 2024-02-09T09:59:06.006747Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:59:06.015820 waagent[1572]: 
2024-02-09T09:59:06.015755Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:59:06.017038 waagent[1572]: 2024-02-09T09:59:06.016982Z INFO ExtHandler Feb 9 09:59:06.017280 waagent[1572]: 2024-02-09T09:59:06.017230Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7708b759-9a65-415d-bf9a-28cbc68bb8fe eTag: 10011988503046185978 source: Fabric] Feb 9 09:59:06.018134 waagent[1572]: 2024-02-09T09:59:06.018079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 09:59:06.019465 waagent[1572]: 2024-02-09T09:59:06.019404Z INFO ExtHandler Feb 9 09:59:06.019700 waagent[1572]: 2024-02-09T09:59:06.019652Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:59:06.026057 waagent[1572]: 2024-02-09T09:59:06.026009Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:59:06.026666 waagent[1572]: 2024-02-09T09:59:06.026620Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:59:06.049493 waagent[1572]: 2024-02-09T09:59:06.049425Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 9 09:59:06.128105 waagent[1572]: 2024-02-09T09:59:06.127967Z INFO ExtHandler Downloaded certificate {'thumbprint': '9363FC4A89ACEE9AAE7CEAB98992FB879C1B86FD', 'hasPrivateKey': False} Feb 9 09:59:06.129333 waagent[1572]: 2024-02-09T09:59:06.129254Z INFO ExtHandler Downloaded certificate {'thumbprint': '4E5171DB65194287C473276C915A4B8BB4726258', 'hasPrivateKey': True} Feb 9 09:59:06.130482 waagent[1572]: 2024-02-09T09:59:06.130423Z INFO ExtHandler Fetch goal state completed Feb 9 09:59:06.157075 waagent[1572]: 2024-02-09T09:59:06.156986Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1572 Feb 9 09:59:06.160822 waagent[1572]: 2024-02-09T09:59:06.160744Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:59:06.162452 waagent[1572]: 2024-02-09T09:59:06.162390Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:59:06.167372 waagent[1572]: 2024-02-09T09:59:06.167318Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:59:06.167902 waagent[1572]: 2024-02-09T09:59:06.167846Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:59:06.176024 waagent[1572]: 2024-02-09T09:59:06.175971Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:59:06.176666 waagent[1572]: 2024-02-09T09:59:06.176610Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:59:06.182580 waagent[1572]: 2024-02-09T09:59:06.182481Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Feb 9 09:59:06.186332 waagent[1572]: 2024-02-09T09:59:06.186251Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:59:06.187951 waagent[1572]: 2024-02-09T09:59:06.187883Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:59:06.188237 waagent[1572]: 2024-02-09T09:59:06.188164Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:06.188980 waagent[1572]: 2024-02-09T09:59:06.188902Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:06.189636 waagent[1572]: 2024-02-09T09:59:06.189560Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:59:06.190200 waagent[1572]: 2024-02-09T09:59:06.190130Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:59:06.190967 waagent[1572]: 2024-02-09T09:59:06.190887Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:59:06.191147 waagent[1572]: 2024-02-09T09:59:06.191096Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:59:06.191147 waagent[1572]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:59:06.191147 waagent[1572]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:59:06.191147 waagent[1572]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:59:06.191147 waagent[1572]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:06.191147 waagent[1572]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:06.191147 waagent[1572]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:59:06.191147 waagent[1572]: 2024-02-09T09:59:06.191026Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:59:06.191466 waagent[1572]: 2024-02-09T09:59:06.191405Z INFO ExtHandler 
ExtHandler Start Extension Telemetry service. Feb 9 09:59:06.193951 waagent[1572]: 2024-02-09T09:59:06.193779Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:59:06.194958 waagent[1572]: 2024-02-09T09:59:06.194863Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:59:06.195216 waagent[1572]: 2024-02-09T09:59:06.195143Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:59:06.195392 waagent[1572]: 2024-02-09T09:59:06.195314Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:59:06.197949 waagent[1572]: 2024-02-09T09:59:06.197876Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:59:06.199669 waagent[1572]: 2024-02-09T09:59:06.199602Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:59:06.205431 waagent[1572]: 2024-02-09T09:59:06.205328Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:59:06.215119 waagent[1572]: 2024-02-09T09:59:06.215039Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:59:06.215652 waagent[1572]: 2024-02-09T09:59:06.215569Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:59:06.215652 waagent[1572]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:59:06.215652 waagent[1572]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:59:06.215652 waagent[1572]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:49:40 brd ff:ff:ff:ff:ff:ff Feb 9 09:59:06.215652 waagent[1572]: 3: enP3206s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:49:40 brd ff:ff:ff:ff:ff:ff\ altname enP3206p0s2 Feb 9 09:59:06.215652 waagent[1572]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 
9 09:59:06.215652 waagent[1572]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:59:06.215652 waagent[1572]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:59:06.215652 waagent[1572]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:59:06.215652 waagent[1572]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:59:06.215652 waagent[1572]: 2: eth0 inet6 fe80::20d:3aff:fec4:4940/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:59:06.215957 waagent[1572]: 2024-02-09T09:59:06.215892Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:59:06.253665 waagent[1572]: 2024-02-09T09:59:06.253558Z INFO ExtHandler ExtHandler Feb 9 09:59:06.253774 waagent[1572]: 2024-02-09T09:59:06.253722Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 218ee307-f857-4438-a3b2-69ddc3f9d231 correlation aea173ce-e93f-4eca-b0d5-9d7a1933a6d1 created: 2024-02-09T09:57:25.574364Z] Feb 9 09:59:06.254915 waagent[1572]: 2024-02-09T09:59:06.254847Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 09:59:06.256734 waagent[1572]: 2024-02-09T09:59:06.256676Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 09:59:06.282985 waagent[1572]: 2024-02-09T09:59:06.282899Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 09:59:06.298723 waagent[1572]: 2024-02-09T09:59:06.298642Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0EE26F91-27CD-47FC-81BD-625ECBEC6011;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:59:06.488155 waagent[1572]: 2024-02-09T09:59:06.488020Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 09:59:06.488155 waagent[1572]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.488155 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.488155 waagent[1572]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.488155 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.488155 waagent[1572]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.488155 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.488155 waagent[1572]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:59:06.488155 waagent[1572]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:59:06.488155 waagent[1572]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:59:06.495154 waagent[1572]: 2024-02-09T09:59:06.495034Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:59:06.495154 waagent[1572]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.495154 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.495154 waagent[1572]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.495154 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.495154 waagent[1572]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:59:06.495154 waagent[1572]: pkts bytes target prot opt in out source destination Feb 9 09:59:06.495154 waagent[1572]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:59:06.495154 waagent[1572]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:59:06.495154 waagent[1572]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:59:06.495710 waagent[1572]: 2024-02-09T09:59:06.495653Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:59:29.969531 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 9 09:59:32.283788 systemd[1]: Created slice system-sshd.slice. Feb 9 09:59:32.284927 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.12.6:53618.service. Feb 9 09:59:32.978006 sshd[1622]: Accepted publickey for core from 10.200.12.6 port 53618 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:32.995613 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:32.999216 systemd-logind[1368]: New session 3 of user core. Feb 9 09:59:33.000002 systemd[1]: Started session-3.scope. Feb 9 09:59:33.317914 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.12.6:53626.service. Feb 9 09:59:33.703788 sshd[1627]: Accepted publickey for core from 10.200.12.6 port 53626 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:33.704978 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:33.708740 systemd-logind[1368]: New session 4 of user core. Feb 9 09:59:33.709164 systemd[1]: Started session-4.scope. Feb 9 09:59:33.985613 sshd[1627]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:33.988119 systemd[1]: sshd@1-10.200.20.16:22-10.200.12.6:53626.service: Deactivated successfully. Feb 9 09:59:33.988816 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:59:33.989361 systemd-logind[1368]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:59:33.990170 systemd-logind[1368]: Removed session 4. Feb 9 09:59:34.050795 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.12.6:53632.service. Feb 9 09:59:34.431181 sshd[1633]: Accepted publickey for core from 10.200.12.6 port 53632 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:34.432742 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:34.436805 systemd[1]: Started session-5.scope. Feb 9 09:59:34.438106 systemd-logind[1368]: New session 5 of user core. 
Feb 9 09:59:34.705669 sshd[1633]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:34.708266 systemd[1]: sshd@2-10.200.20.16:22-10.200.12.6:53632.service: Deactivated successfully. Feb 9 09:59:34.708940 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:59:34.709421 systemd-logind[1368]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:59:34.710060 systemd-logind[1368]: Removed session 5. Feb 9 09:59:34.769477 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.12.6:53648.service. Feb 9 09:59:35.150344 sshd[1639]: Accepted publickey for core from 10.200.12.6 port 53648 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:35.151661 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:35.155369 systemd-logind[1368]: New session 6 of user core. Feb 9 09:59:35.155864 systemd[1]: Started session-6.scope. Feb 9 09:59:35.428993 sshd[1639]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:35.431479 systemd[1]: sshd@3-10.200.20.16:22-10.200.12.6:53648.service: Deactivated successfully. Feb 9 09:59:35.432125 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:59:35.432676 systemd-logind[1368]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:59:35.433519 systemd-logind[1368]: Removed session 6. Feb 9 09:59:35.506794 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.12.6:53658.service. Feb 9 09:59:35.890757 sshd[1645]: Accepted publickey for core from 10.200.12.6 port 53658 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:35.892380 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:35.896042 systemd-logind[1368]: New session 7 of user core. Feb 9 09:59:35.896546 systemd[1]: Started session-7.scope. 
Feb 9 09:59:36.509928 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:59:36.510133 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:59:37.221214 systemd[1]: Starting docker.service... Feb 9 09:59:37.251088 env[1663]: time="2024-02-09T09:59:37.251029295Z" level=info msg="Starting up" Feb 9 09:59:37.253280 env[1663]: time="2024-02-09T09:59:37.253256561Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:59:37.253407 env[1663]: time="2024-02-09T09:59:37.253388795Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:59:37.253483 env[1663]: time="2024-02-09T09:59:37.253467232Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:59:37.253544 env[1663]: time="2024-02-09T09:59:37.253526749Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:59:37.255602 env[1663]: time="2024-02-09T09:59:37.255580983Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:59:37.255699 env[1663]: time="2024-02-09T09:59:37.255685178Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:59:37.255759 env[1663]: time="2024-02-09T09:59:37.255744696Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:59:37.255818 env[1663]: time="2024-02-09T09:59:37.255806413Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:59:37.261306 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3367531050-merged.mount: Deactivated successfully. Feb 9 09:59:37.384443 env[1663]: time="2024-02-09T09:59:37.384405689Z" level=info msg="Loading containers: start." 
Feb 9 09:59:37.587311 kernel: Initializing XFRM netlink socket Feb 9 09:59:37.609971 env[1663]: time="2024-02-09T09:59:37.609931223Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:59:37.719593 systemd-networkd[1536]: docker0: Link UP Feb 9 09:59:37.759183 env[1663]: time="2024-02-09T09:59:37.759137987Z" level=info msg="Loading containers: done." Feb 9 09:59:37.770145 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck539654974-merged.mount: Deactivated successfully. Feb 9 09:59:37.805480 env[1663]: time="2024-02-09T09:59:37.805429627Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:59:37.805661 env[1663]: time="2024-02-09T09:59:37.805637939Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:59:37.805769 env[1663]: time="2024-02-09T09:59:37.805748574Z" level=info msg="Daemon has completed initialization" Feb 9 09:59:37.875549 systemd[1]: Started docker.service. Feb 9 09:59:37.879932 env[1663]: time="2024-02-09T09:59:37.879878756Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:59:37.895708 systemd[1]: Reloading. 
Feb 9 09:59:37.955402 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2024-02-09T09:59:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:59:37.955432 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2024-02-09T09:59:37Z" level=info msg="torcx already run" Feb 9 09:59:38.031005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:59:38.031024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:59:38.048078 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:59:38.124806 systemd[1]: Started kubelet.service. Feb 9 09:59:38.187267 kubelet[1851]: E0209 09:59:38.187201 1851 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:59:38.189333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:59:38.189462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:59:39.730744 update_engine[1372]: I0209 09:59:39.730702 1372 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:59:42.332920 env[1379]: time="2024-02-09T09:59:42.332878863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 09:59:43.414587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749502679.mount: Deactivated successfully. Feb 9 09:59:46.181081 env[1379]: time="2024-02-09T09:59:46.181017646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:46.197238 env[1379]: time="2024-02-09T09:59:46.197191263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:46.206012 env[1379]: time="2024-02-09T09:59:46.205963575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:46.228390 env[1379]: time="2024-02-09T09:59:46.228331485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:46.229430 env[1379]: time="2024-02-09T09:59:46.229389300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\"" Feb 9 09:59:46.239199 env[1379]: time="2024-02-09T09:59:46.239144029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 09:59:48.316418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:59:48.316594 systemd[1]: Stopped kubelet.service. Feb 9 09:59:48.317976 systemd[1]: Started kubelet.service. 
Feb 9 09:59:48.386603 kubelet[1911]: E0209 09:59:48.386559 1911 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:59:48.389949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:59:48.390074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:59:48.555193 env[1379]: time="2024-02-09T09:59:48.555140966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:48.576488 env[1379]: time="2024-02-09T09:59:48.576394603Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:48.590355 env[1379]: time="2024-02-09T09:59:48.590321314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:48.601250 env[1379]: time="2024-02-09T09:59:48.601203847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:48.601990 env[1379]: time="2024-02-09T09:59:48.601962711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\"" Feb 9 09:59:48.612034 env[1379]: time="2024-02-09T09:59:48.612001422Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 09:59:50.439938 env[1379]: time="2024-02-09T09:59:50.439882984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.456354 env[1379]: time="2024-02-09T09:59:50.456268307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.469477 env[1379]: time="2024-02-09T09:59:50.469431311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.481072 env[1379]: time="2024-02-09T09:59:50.481024719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.481914 env[1379]: time="2024-02-09T09:59:50.481882801Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\"" Feb 9 09:59:50.491211 env[1379]: time="2024-02-09T09:59:50.491179416Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 09:59:51.701769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991964719.mount: Deactivated successfully. 
Feb 9 09:59:52.521665 env[1379]: time="2024-02-09T09:59:52.521600175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:52.535720 env[1379]: time="2024-02-09T09:59:52.535662806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:52.545066 env[1379]: time="2024-02-09T09:59:52.545030235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:52.553055 env[1379]: time="2024-02-09T09:59:52.553021295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:52.553652 env[1379]: time="2024-02-09T09:59:52.553628412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 09:59:52.563545 env[1379]: time="2024-02-09T09:59:52.563514747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:59:53.406019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971324517.mount: Deactivated successfully. 
Feb 9 09:59:53.461549 env[1379]: time="2024-02-09T09:59:53.461509099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:53.483119 env[1379]: time="2024-02-09T09:59:53.483079795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:53.494264 env[1379]: time="2024-02-09T09:59:53.494216885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:53.509794 env[1379]: time="2024-02-09T09:59:53.509758052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:53.510574 env[1379]: time="2024-02-09T09:59:53.510550185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:59:53.520017 env[1379]: time="2024-02-09T09:59:53.519984261Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 09:59:54.575074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073972875.mount: Deactivated successfully. Feb 9 09:59:58.566397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:59:58.566574 systemd[1]: Stopped kubelet.service. Feb 9 09:59:58.567928 systemd[1]: Started kubelet.service. 
Feb 9 09:59:58.607090 kubelet[1938]: E0209 09:59:58.607039 1938 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:59:58.608911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:59:58.609037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:59:58.658014 env[1379]: time="2024-02-09T09:59:58.657961236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:58.683496 env[1379]: time="2024-02-09T09:59:58.683424152Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:58.697361 env[1379]: time="2024-02-09T09:59:58.697328314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:58.710943 env[1379]: time="2024-02-09T09:59:58.710886824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:58.711789 env[1379]: time="2024-02-09T09:59:58.711760779Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Feb 9 09:59:58.719763 env[1379]: time="2024-02-09T09:59:58.719728946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 
09:59:59.606024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848409029.mount: Deactivated successfully. Feb 9 10:00:00.249311 env[1379]: time="2024-02-09T10:00:00.249254875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.278650 env[1379]: time="2024-02-09T10:00:00.278583420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.294175 env[1379]: time="2024-02-09T10:00:00.294139713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.324124 env[1379]: time="2024-02-09T10:00:00.324075943Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.324897 env[1379]: time="2024-02-09T10:00:00.324866316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 10:00:06.365574 systemd[1]: Stopped kubelet.service. Feb 9 10:00:06.381985 systemd[1]: Reloading. 
Feb 9 10:00:06.462094 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2024-02-09T10:00:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:00:06.462129 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2024-02-09T10:00:06Z" level=info msg="torcx already run" Feb 9 10:00:06.540571 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:00:06.540740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:00:06.558570 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:00:06.661553 systemd[1]: Started kubelet.service. Feb 9 10:00:06.710007 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:00:06.710007 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:00:06.710007 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 10:00:06.710383 kubelet[2092]: I0209 10:00:06.710065 2092 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:00:07.685876 kubelet[2092]: I0209 10:00:07.685847 2092 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 10:00:07.686070 kubelet[2092]: I0209 10:00:07.686059 2092 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:00:07.686365 kubelet[2092]: I0209 10:00:07.686350 2092 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 10:00:07.690253 kubelet[2092]: E0209 10:00:07.690210 2092 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.690356 kubelet[2092]: I0209 10:00:07.690269 2092 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:00:07.691910 kubelet[2092]: W0209 10:00:07.691895 2092 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:00:07.692788 kubelet[2092]: I0209 10:00:07.692758 2092 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 10:00:07.693037 kubelet[2092]: I0209 10:00:07.693015 2092 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:00:07.693113 kubelet[2092]: I0209 10:00:07.693096 2092 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 10:00:07.693196 kubelet[2092]: I0209 10:00:07.693119 2092 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 10:00:07.693196 kubelet[2092]: I0209 10:00:07.693135 2092 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 10:00:07.693248 kubelet[2092]: I0209 10:00:07.693224 2092 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
10:00:07.697473 kubelet[2092]: I0209 10:00:07.697452 2092 kubelet.go:405] "Attempting to sync node with API server" Feb 9 10:00:07.697578 kubelet[2092]: I0209 10:00:07.697567 2092 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:00:07.697652 kubelet[2092]: I0209 10:00:07.697643 2092 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:00:07.697712 kubelet[2092]: I0209 10:00:07.697704 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:00:07.698464 kubelet[2092]: W0209 10:00:07.698426 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.698586 kubelet[2092]: E0209 10:00:07.698575 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.698721 kubelet[2092]: W0209 10:00:07.698693 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-ff24132019&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.698797 kubelet[2092]: E0209 10:00:07.698785 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-ff24132019&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.698935 kubelet[2092]: I0209 10:00:07.698923 2092 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" 
apiVersion="v1" Feb 9 10:00:07.699222 kubelet[2092]: W0209 10:00:07.699206 2092 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 10:00:07.699675 kubelet[2092]: I0209 10:00:07.699658 2092 server.go:1168] "Started kubelet" Feb 9 10:00:07.707874 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 10:00:07.707990 kubelet[2092]: E0209 10:00:07.701899 2092 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-ff24132019.17b22978d6936b97", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-ff24132019", UID:"ci-3510.3.2-a-ff24132019", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-ff24132019"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 0, 7, 699639191, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 0, 7, 699639191, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.16:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.16:6443: connect: connection refused'(may retry after sleeping) Feb 9 10:00:07.707990 kubelet[2092]: I0209 10:00:07.702187 2092 server.go:162] "Starting to listen" 
address="0.0.0.0" port=10250 Feb 9 10:00:07.707990 kubelet[2092]: I0209 10:00:07.702755 2092 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:00:07.707990 kubelet[2092]: I0209 10:00:07.703603 2092 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:00:07.708477 kubelet[2092]: I0209 10:00:07.708446 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:00:07.708565 kubelet[2092]: E0209 10:00:07.708551 2092 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:00:07.708631 kubelet[2092]: E0209 10:00:07.708622 2092 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:00:07.712475 kubelet[2092]: E0209 10:00:07.712459 2092 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-ff24132019\" not found" Feb 9 10:00:07.712749 kubelet[2092]: I0209 10:00:07.712737 2092 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:00:07.712900 kubelet[2092]: I0209 10:00:07.712888 2092 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:00:07.713321 kubelet[2092]: W0209 10:00:07.713271 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.713418 kubelet[2092]: E0209 10:00:07.713409 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: 
connection refused Feb 9 10:00:07.713667 kubelet[2092]: E0209 10:00:07.713652 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-ff24132019?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms" Feb 9 10:00:07.766161 kubelet[2092]: I0209 10:00:07.766126 2092 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 10:00:07.767106 kubelet[2092]: I0209 10:00:07.767083 2092 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 10:00:07.767230 kubelet[2092]: I0209 10:00:07.767113 2092 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:00:07.767230 kubelet[2092]: I0209 10:00:07.767137 2092 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:00:07.767230 kubelet[2092]: E0209 10:00:07.767179 2092 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:00:07.767972 kubelet[2092]: W0209 10:00:07.767950 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.768078 kubelet[2092]: E0209 10:00:07.768067 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:07.809312 kubelet[2092]: I0209 10:00:07.809276 2092 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:00:07.809467 kubelet[2092]: I0209 10:00:07.809456 2092 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Feb 9 10:00:07.809539 kubelet[2092]: I0209 10:00:07.809530 2092 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:00:07.814913 kubelet[2092]: I0209 10:00:07.814890 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:07.817037 kubelet[2092]: E0209 10:00:07.815179 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:07.820483 kubelet[2092]: I0209 10:00:07.820465 2092 policy_none.go:49] "None policy: Start" Feb 9 10:00:07.821210 kubelet[2092]: I0209 10:00:07.821184 2092 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:00:07.821412 kubelet[2092]: I0209 10:00:07.821325 2092 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:00:07.835008 systemd[1]: Created slice kubepods.slice. Feb 9 10:00:07.838820 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 10:00:07.841368 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 10:00:07.852989 kubelet[2092]: I0209 10:00:07.852960 2092 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:00:07.853525 kubelet[2092]: I0209 10:00:07.853493 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:00:07.855331 kubelet[2092]: E0209 10:00:07.854972 2092 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-ff24132019\" not found" Feb 9 10:00:07.867704 kubelet[2092]: I0209 10:00:07.867659 2092 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:07.869227 kubelet[2092]: I0209 10:00:07.869203 2092 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:07.870646 kubelet[2092]: I0209 10:00:07.870629 2092 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:07.875361 systemd[1]: Created slice kubepods-burstable-pod06802d2832255ee476b45342e262347a.slice. Feb 9 10:00:07.885112 systemd[1]: Created slice kubepods-burstable-pod104fc4290a19538fbd225b6ffd3434fb.slice. Feb 9 10:00:07.895910 systemd[1]: Created slice kubepods-burstable-podd1a3c0bbb7017cc3224abb8c60bc750e.slice. 
Feb 9 10:00:07.914056 kubelet[2092]: E0209 10:00:07.914023 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-ff24132019?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" Feb 9 10:00:08.016347 kubelet[2092]: I0209 10:00:08.014455 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016347 kubelet[2092]: I0209 10:00:08.014500 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016347 kubelet[2092]: I0209 10:00:08.014525 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016347 kubelet[2092]: I0209 10:00:08.014544 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016347 kubelet[2092]: I0209 10:00:08.014567 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016532 kubelet[2092]: I0209 10:00:08.014587 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016532 kubelet[2092]: I0209 10:00:08.014607 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016532 kubelet[2092]: I0209 10:00:08.014626 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.016532 kubelet[2092]: I0209 10:00:08.014646 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d1a3c0bbb7017cc3224abb8c60bc750e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-ff24132019\" (UID: \"d1a3c0bbb7017cc3224abb8c60bc750e\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.017328 kubelet[2092]: I0209 10:00:08.017284 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.017696 kubelet[2092]: E0209 10:00:08.017679 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.184431 env[1379]: time="2024-02-09T10:00:08.184133142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-ff24132019,Uid:06802d2832255ee476b45342e262347a,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:08.187781 env[1379]: time="2024-02-09T10:00:08.187741035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-ff24132019,Uid:104fc4290a19538fbd225b6ffd3434fb,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:08.198912 env[1379]: time="2024-02-09T10:00:08.198870553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-ff24132019,Uid:d1a3c0bbb7017cc3224abb8c60bc750e,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:08.316421 kubelet[2092]: E0209 10:00:08.315039 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-ff24132019?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="800ms" Feb 9 10:00:08.420698 kubelet[2092]: I0209 10:00:08.420673 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.421122 kubelet[2092]: E0209 10:00:08.421107 2092 kubelet_node_status.go:92] "Unable to register node 
with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:08.614335 kubelet[2092]: W0209 10:00:08.614147 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:08.614335 kubelet[2092]: E0209 10:00:08.614207 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:08.662941 kubelet[2092]: W0209 10:00:08.662861 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:08.662941 kubelet[2092]: E0209 10:00:08.662921 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:08.895559 kubelet[2092]: W0209 10:00:08.895440 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-ff24132019&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:08.895559 kubelet[2092]: E0209 10:00:08.895502 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-ff24132019&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:09.115704 kubelet[2092]: E0209 10:00:09.115666 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-ff24132019?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="1.6s" Feb 9 10:00:09.126730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933019770.mount: Deactivated successfully. Feb 9 10:00:09.223819 kubelet[2092]: I0209 10:00:09.223434 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:09.223819 kubelet[2092]: E0209 10:00:09.223738 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:09.238008 env[1379]: time="2024-02-09T10:00:09.237968411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.263823 env[1379]: time="2024-02-09T10:00:09.263754256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.276495 env[1379]: time="2024-02-09T10:00:09.276462090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.287671 kubelet[2092]: W0209 10:00:09.287586 2092 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:09.287671 kubelet[2092]: E0209 10:00:09.287645 2092 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:09.289309 env[1379]: time="2024-02-09T10:00:09.289247593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.306476 env[1379]: time="2024-02-09T10:00:09.306439891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.328555 env[1379]: time="2024-02-09T10:00:09.328518561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.348390 env[1379]: time="2024-02-09T10:00:09.348364683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.362392 env[1379]: time="2024-02-09T10:00:09.362354633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.369442 env[1379]: time="2024-02-09T10:00:09.369409530Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.395158 env[1379]: time="2024-02-09T10:00:09.395127430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.420677 env[1379]: time="2024-02-09T10:00:09.420639374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.432611 env[1379]: time="2024-02-09T10:00:09.432567519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:09.576903 env[1379]: time="2024-02-09T10:00:09.576828073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:09.576903 env[1379]: time="2024-02-09T10:00:09.576907382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:09.577082 env[1379]: time="2024-02-09T10:00:09.576935233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:09.577188 env[1379]: time="2024-02-09T10:00:09.577129705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f pid=2131 runtime=io.containerd.runc.v2 Feb 9 10:00:09.594758 systemd[1]: Started cri-containerd-651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f.scope. 
Feb 9 10:00:09.625361 env[1379]: time="2024-02-09T10:00:09.625284528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-ff24132019,Uid:06802d2832255ee476b45342e262347a,Namespace:kube-system,Attempt:0,} returns sandbox id \"651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f\"" Feb 9 10:00:09.628916 env[1379]: time="2024-02-09T10:00:09.628873940Z" level=info msg="CreateContainer within sandbox \"651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 10:00:09.639762 env[1379]: time="2024-02-09T10:00:09.639678148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:09.639762 env[1379]: time="2024-02-09T10:00:09.639728486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:09.639762 env[1379]: time="2024-02-09T10:00:09.639738970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:09.640157 env[1379]: time="2024-02-09T10:00:09.640093862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb pid=2173 runtime=io.containerd.runc.v2 Feb 9 10:00:09.653444 systemd[1]: Started cri-containerd-7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb.scope. Feb 9 10:00:09.660912 env[1379]: time="2024-02-09T10:00:09.654712885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:09.660912 env[1379]: time="2024-02-09T10:00:09.654771066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:09.660912 env[1379]: time="2024-02-09T10:00:09.654781470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:09.660912 env[1379]: time="2024-02-09T10:00:09.654883748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82 pid=2198 runtime=io.containerd.runc.v2 Feb 9 10:00:09.677997 systemd[1]: Started cri-containerd-60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82.scope. Feb 9 10:00:09.709942 env[1379]: time="2024-02-09T10:00:09.709889913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-ff24132019,Uid:104fc4290a19538fbd225b6ffd3434fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb\"" Feb 9 10:00:09.714734 env[1379]: time="2024-02-09T10:00:09.714686212Z" level=info msg="CreateContainer within sandbox \"7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 10:00:09.720734 env[1379]: time="2024-02-09T10:00:09.720678315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-ff24132019,Uid:d1a3c0bbb7017cc3224abb8c60bc750e,Namespace:kube-system,Attempt:0,} returns sandbox id \"60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82\"" Feb 9 10:00:09.725452 env[1379]: time="2024-02-09T10:00:09.725418153Z" level=info msg="CreateContainer within sandbox \"60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 10:00:09.783073 env[1379]: time="2024-02-09T10:00:09.783030805Z" level=info msg="CreateContainer within sandbox 
\"651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c134843e7034d113fe2fb52fd40f11aa9821f561c93ee371c8e07c6bed0faa78\"" Feb 9 10:00:09.783910 env[1379]: time="2024-02-09T10:00:09.783884762Z" level=info msg="StartContainer for \"c134843e7034d113fe2fb52fd40f11aa9821f561c93ee371c8e07c6bed0faa78\"" Feb 9 10:00:09.799461 systemd[1]: Started cri-containerd-c134843e7034d113fe2fb52fd40f11aa9821f561c93ee371c8e07c6bed0faa78.scope. Feb 9 10:00:09.844998 env[1379]: time="2024-02-09T10:00:09.844887791Z" level=info msg="StartContainer for \"c134843e7034d113fe2fb52fd40f11aa9821f561c93ee371c8e07c6bed0faa78\" returns successfully" Feb 9 10:00:09.862366 env[1379]: time="2024-02-09T10:00:09.862316177Z" level=info msg="CreateContainer within sandbox \"7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a\"" Feb 9 10:00:09.863089 env[1379]: time="2024-02-09T10:00:09.863048408Z" level=info msg="StartContainer for \"e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a\"" Feb 9 10:00:09.869015 kubelet[2092]: E0209 10:00:09.868985 2092 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.16:6443: connect: connection refused Feb 9 10:00:09.880003 systemd[1]: Started cri-containerd-e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a.scope. 
Feb 9 10:00:09.898235 env[1379]: time="2024-02-09T10:00:09.898160793Z" level=info msg="CreateContainer within sandbox \"60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc\"" Feb 9 10:00:09.898758 env[1379]: time="2024-02-09T10:00:09.898730645Z" level=info msg="StartContainer for \"85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc\"" Feb 9 10:00:09.935796 systemd[1]: Started cri-containerd-85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc.scope. Feb 9 10:00:09.956488 env[1379]: time="2024-02-09T10:00:09.956417924Z" level=info msg="StartContainer for \"e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a\" returns successfully" Feb 9 10:00:10.044226 env[1379]: time="2024-02-09T10:00:10.044172390Z" level=info msg="StartContainer for \"85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc\" returns successfully" Feb 9 10:00:10.125440 systemd[1]: run-containerd-runc-k8s.io-651136cb9425ee066c263e72d8925645a48540bc9be597fda8bfd3d3ef01860f-runc.gzgL2G.mount: Deactivated successfully. 
Feb 9 10:00:10.825695 kubelet[2092]: I0209 10:00:10.825673 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:12.024029 kubelet[2092]: E0209 10:00:12.023997 2092 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-ff24132019\" not found" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:12.064942 kubelet[2092]: I0209 10:00:12.064913 2092 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:12.703836 kubelet[2092]: I0209 10:00:12.703803 2092 apiserver.go:52] "Watching apiserver" Feb 9 10:00:12.713490 kubelet[2092]: I0209 10:00:12.713460 2092 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:00:12.738610 kubelet[2092]: I0209 10:00:12.738578 2092 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:00:12.796136 kubelet[2092]: W0209 10:00:12.796103 2092 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 10:00:14.759523 systemd[1]: Reloading. Feb 9 10:00:14.831680 /usr/lib/systemd/system-generators/torcx-generator[2383]: time="2024-02-09T10:00:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:00:14.831725 /usr/lib/systemd/system-generators/torcx-generator[2383]: time="2024-02-09T10:00:14Z" level=info msg="torcx already run" Feb 9 10:00:14.908277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 10:00:14.908304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:00:14.925789 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:00:15.032767 kubelet[2092]: I0209 10:00:15.032660 2092 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:00:15.033120 systemd[1]: Stopping kubelet.service... Feb 9 10:00:15.051681 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 10:00:15.051895 systemd[1]: Stopped kubelet.service. Feb 9 10:00:15.051947 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Feb 9 10:00:15.053782 systemd[1]: Started kubelet.service. Feb 9 10:00:15.147775 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:00:15.147775 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:00:15.147775 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 10:00:15.147775 kubelet[2442]: I0209 10:00:15.147205 2442 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:00:15.153545 kubelet[2442]: I0209 10:00:15.153513 2442 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 10:00:15.153545 kubelet[2442]: I0209 10:00:15.153540 2442 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:00:15.153754 kubelet[2442]: I0209 10:00:15.153734 2442 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 10:00:15.155314 kubelet[2442]: I0209 10:00:15.155281 2442 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 10:00:15.156984 kubelet[2442]: I0209 10:00:15.156963 2442 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:00:15.159469 kubelet[2442]: W0209 10:00:15.159448 2442 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 10:00:15.160056 kubelet[2442]: I0209 10:00:15.160025 2442 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 10:00:15.160264 kubelet[2442]: I0209 10:00:15.160236 2442 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:00:15.160340 kubelet[2442]: I0209 10:00:15.160311 2442 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 10:00:15.160417 kubelet[2442]: I0209 10:00:15.160348 2442 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 10:00:15.160417 kubelet[2442]: I0209 10:00:15.160366 2442 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 10:00:15.160417 kubelet[2442]: I0209 10:00:15.160392 2442 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
10:00:15.164994 kubelet[2442]: I0209 10:00:15.164961 2442 kubelet.go:405] "Attempting to sync node with API server" Feb 9 10:00:15.164994 kubelet[2442]: I0209 10:00:15.164994 2442 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:00:15.165115 kubelet[2442]: I0209 10:00:15.165024 2442 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:00:15.165115 kubelet[2442]: I0209 10:00:15.165043 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:00:15.171673 kubelet[2442]: I0209 10:00:15.171625 2442 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:00:15.172206 kubelet[2442]: I0209 10:00:15.172183 2442 server.go:1168] "Started kubelet" Feb 9 10:00:15.177930 kubelet[2442]: I0209 10:00:15.173119 2442 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:00:15.177930 kubelet[2442]: I0209 10:00:15.173962 2442 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:00:15.177930 kubelet[2442]: I0209 10:00:15.174961 2442 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:00:15.177930 kubelet[2442]: I0209 10:00:15.176360 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:00:15.181678 kubelet[2442]: E0209 10:00:15.181318 2442 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:00:15.181817 kubelet[2442]: E0209 10:00:15.181804 2442 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:00:15.197069 kubelet[2442]: I0209 10:00:15.195704 2442 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:00:15.197069 kubelet[2442]: I0209 10:00:15.196116 2442 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:00:15.220513 kubelet[2442]: I0209 10:00:15.220481 2442 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 10:00:15.221408 kubelet[2442]: I0209 10:00:15.221387 2442 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 10:00:15.221461 kubelet[2442]: I0209 10:00:15.221419 2442 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:00:15.221461 kubelet[2442]: I0209 10:00:15.221438 2442 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:00:15.221509 kubelet[2442]: E0209 10:00:15.221496 2442 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 10:00:15.230692 sudo[2463]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 10:00:15.231171 sudo[2463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 10:00:15.313001 kubelet[2442]: I0209 10:00:15.312907 2442 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:00:15.313001 kubelet[2442]: I0209 10:00:15.312935 2442 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:00:15.313001 kubelet[2442]: I0209 10:00:15.312955 2442 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:00:15.314796 kubelet[2442]: I0209 10:00:15.314764 2442 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 10:00:15.314796 kubelet[2442]: I0209 10:00:15.314794 2442 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 10:00:15.314796 kubelet[2442]: I0209 
10:00:15.314803 2442 policy_none.go:49] "None policy: Start" Feb 9 10:00:15.316260 kubelet[2442]: I0209 10:00:15.315956 2442 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:00:15.316260 kubelet[2442]: I0209 10:00:15.315990 2442 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:00:15.316260 kubelet[2442]: I0209 10:00:15.316158 2442 state_mem.go:75] "Updated machine memory state" Feb 9 10:00:15.319696 kubelet[2442]: I0209 10:00:15.319661 2442 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:00:15.319921 kubelet[2442]: I0209 10:00:15.319896 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:00:15.321608 kubelet[2442]: I0209 10:00:15.321576 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:15.321675 kubelet[2442]: I0209 10:00:15.321663 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:15.321721 kubelet[2442]: I0209 10:00:15.321702 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:15.327631 kubelet[2442]: I0209 10:00:15.327588 2442 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.327837 kubelet[2442]: W0209 10:00:15.327815 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 10:00:15.338996 kubelet[2442]: W0209 10:00:15.338956 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 10:00:15.347264 kubelet[2442]: W0209 10:00:15.347220 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 10:00:15.347421 kubelet[2442]: E0209 10:00:15.347324 2442 kubelet.go:1856] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-ff24132019\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.355402 kubelet[2442]: I0209 10:00:15.355336 2442 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.355527 kubelet[2442]: I0209 10:00:15.355441 2442 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406475 kubelet[2442]: I0209 10:00:15.406438 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406475 kubelet[2442]: I0209 10:00:15.406481 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406634 kubelet[2442]: I0209 10:00:15.406515 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406634 kubelet[2442]: I0209 10:00:15.406535 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406634 kubelet[2442]: I0209 10:00:15.406561 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406634 kubelet[2442]: I0209 10:00:15.406592 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104fc4290a19538fbd225b6ffd3434fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-ff24132019\" (UID: \"104fc4290a19538fbd225b6ffd3434fb\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406634 kubelet[2442]: I0209 10:00:15.406617 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1a3c0bbb7017cc3224abb8c60bc750e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-ff24132019\" (UID: \"d1a3c0bbb7017cc3224abb8c60bc750e\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406749 kubelet[2442]: I0209 10:00:15.406636 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.406749 kubelet[2442]: I0209 10:00:15.406669 2442 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06802d2832255ee476b45342e262347a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-ff24132019\" (UID: \"06802d2832255ee476b45342e262347a\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:15.798497 sudo[2463]: pam_unix(sudo:session): session closed for user root Feb 9 10:00:16.166814 kubelet[2442]: I0209 10:00:16.166779 2442 apiserver.go:52] "Watching apiserver" Feb 9 10:00:16.197201 kubelet[2442]: I0209 10:00:16.197178 2442 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:00:16.211514 kubelet[2442]: I0209 10:00:16.211469 2442 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:00:16.294806 kubelet[2442]: W0209 10:00:16.294768 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 10:00:16.294958 kubelet[2442]: E0209 10:00:16.294843 2442 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-ff24132019\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" Feb 9 10:00:16.379626 kubelet[2442]: I0209 10:00:16.379591 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-ff24132019" podStartSLOduration=1.379525122 podCreationTimestamp="2024-02-09 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:16.363970973 +0000 UTC m=+1.303549787" watchObservedRunningTime="2024-02-09 10:00:16.379525122 +0000 UTC m=+1.319103936" Feb 9 10:00:16.411591 kubelet[2442]: I0209 10:00:16.411555 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-ff24132019" podStartSLOduration=4.411516293 podCreationTimestamp="2024-02-09 10:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:16.386193592 +0000 UTC m=+1.325772366" watchObservedRunningTime="2024-02-09 10:00:16.411516293 +0000 UTC m=+1.351095107" Feb 9 10:00:16.428958 kubelet[2442]: I0209 10:00:16.428859 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-ff24132019" podStartSLOduration=1.4288213459999999 podCreationTimestamp="2024-02-09 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:16.412009607 +0000 UTC m=+1.351588421" watchObservedRunningTime="2024-02-09 10:00:16.428821346 +0000 UTC m=+1.368400160" Feb 9 10:00:17.233506 sudo[1648]: pam_unix(sudo:session): session closed for user root Feb 9 10:00:17.320279 sshd[1645]: pam_unix(sshd:session): session closed for user core Feb 9 10:00:17.323851 systemd-logind[1368]: Session 7 logged out. Waiting for processes to exit. Feb 9 10:00:17.324456 systemd[1]: sshd@4-10.200.20.16:22-10.200.12.6:53658.service: Deactivated successfully. Feb 9 10:00:17.325202 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 10:00:17.325405 systemd[1]: session-7.scope: Consumed 7.060s CPU time. Feb 9 10:00:17.325970 systemd-logind[1368]: Removed session 7. Feb 9 10:00:28.068242 kubelet[2442]: I0209 10:00:28.068219 2442 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 10:00:28.069111 env[1379]: time="2024-02-09T10:00:28.069007717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 10:00:28.069489 kubelet[2442]: I0209 10:00:28.069474 2442 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 10:00:28.628850 kubelet[2442]: I0209 10:00:28.628817 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:28.629490 kubelet[2442]: I0209 10:00:28.629468 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:28.637418 systemd[1]: Created slice kubepods-burstable-pod7ea98bbe_bf3c_4b9e_ba60_14c4824b2148.slice. Feb 9 10:00:28.643079 systemd[1]: Created slice kubepods-besteffort-pod90786865_1237_4fcd_9dbf_460524fa7f8f.slice. Feb 9 10:00:28.666273 kubelet[2442]: I0209 10:00:28.666244 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hostproc\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666524 kubelet[2442]: I0209 10:00:28.666509 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90786865-1237-4fcd-9dbf-460524fa7f8f-xtables-lock\") pod \"kube-proxy-pprpn\" (UID: \"90786865-1237-4fcd-9dbf-460524fa7f8f\") " pod="kube-system/kube-proxy-pprpn" Feb 9 10:00:28.666662 kubelet[2442]: I0209 10:00:28.666630 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-net\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666726 kubelet[2442]: I0209 10:00:28.666680 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-bpf-maps\") pod 
\"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666726 kubelet[2442]: I0209 10:00:28.666704 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-lib-modules\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666779 kubelet[2442]: I0209 10:00:28.666736 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-clustermesh-secrets\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666779 kubelet[2442]: I0209 10:00:28.666759 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbh67\" (UniqueName: \"kubernetes.io/projected/90786865-1237-4fcd-9dbf-460524fa7f8f-kube-api-access-wbh67\") pod \"kube-proxy-pprpn\" (UID: \"90786865-1237-4fcd-9dbf-460524fa7f8f\") " pod="kube-system/kube-proxy-pprpn" Feb 9 10:00:28.666824 kubelet[2442]: I0209 10:00:28.666800 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-etc-cni-netd\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666824 kubelet[2442]: I0209 10:00:28.666822 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90786865-1237-4fcd-9dbf-460524fa7f8f-lib-modules\") pod \"kube-proxy-pprpn\" (UID: \"90786865-1237-4fcd-9dbf-460524fa7f8f\") " pod="kube-system/kube-proxy-pprpn" Feb 9 
10:00:28.666872 kubelet[2442]: I0209 10:00:28.666840 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-cgroup\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666872 kubelet[2442]: I0209 10:00:28.666858 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-config-path\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666914 kubelet[2442]: I0209 10:00:28.666886 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-kernel\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666914 kubelet[2442]: I0209 10:00:28.666910 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90786865-1237-4fcd-9dbf-460524fa7f8f-kube-proxy\") pod \"kube-proxy-pprpn\" (UID: \"90786865-1237-4fcd-9dbf-460524fa7f8f\") " pod="kube-system/kube-proxy-pprpn" Feb 9 10:00:28.666961 kubelet[2442]: I0209 10:00:28.666928 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cni-path\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.666985 kubelet[2442]: I0209 10:00:28.666973 2442 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hubble-tls\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.667009 kubelet[2442]: I0209 10:00:28.666998 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95kl\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.667034 kubelet[2442]: I0209 10:00:28.667017 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-run\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.667067 kubelet[2442]: I0209 10:00:28.667045 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-xtables-lock\") pod \"cilium-lvm7n\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") " pod="kube-system/cilium-lvm7n" Feb 9 10:00:28.789919 kubelet[2442]: E0209 10:00:28.789892 2442 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 10:00:28.790072 kubelet[2442]: E0209 10:00:28.790061 2442 projected.go:198] Error preparing data for projected volume kube-api-access-wbh67 for pod kube-system/kube-proxy-pprpn: configmap "kube-root-ca.crt" not found Feb 9 10:00:28.790180 kubelet[2442]: E0209 10:00:28.790171 2442 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/90786865-1237-4fcd-9dbf-460524fa7f8f-kube-api-access-wbh67 podName:90786865-1237-4fcd-9dbf-460524fa7f8f nodeName:}" failed. No retries permitted until 2024-02-09 10:00:29.290151403 +0000 UTC m=+14.229730217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wbh67" (UniqueName: "kubernetes.io/projected/90786865-1237-4fcd-9dbf-460524fa7f8f-kube-api-access-wbh67") pod "kube-proxy-pprpn" (UID: "90786865-1237-4fcd-9dbf-460524fa7f8f") : configmap "kube-root-ca.crt" not found Feb 9 10:00:28.791839 kubelet[2442]: E0209 10:00:28.791818 2442 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 10:00:28.791942 kubelet[2442]: E0209 10:00:28.791932 2442 projected.go:198] Error preparing data for projected volume kube-api-access-c95kl for pod kube-system/cilium-lvm7n: configmap "kube-root-ca.crt" not found Feb 9 10:00:28.792029 kubelet[2442]: E0209 10:00:28.792019 2442 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl podName:7ea98bbe-bf3c-4b9e-ba60-14c4824b2148 nodeName:}" failed. No retries permitted until 2024-02-09 10:00:29.292007236 +0000 UTC m=+14.231586050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c95kl" (UniqueName: "kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl") pod "cilium-lvm7n" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148") : configmap "kube-root-ca.crt" not found Feb 9 10:00:29.013523 kubelet[2442]: I0209 10:00:29.013429 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:29.018494 systemd[1]: Created slice kubepods-besteffort-pod44e4f136_8942_4728_885b_9e4678b1af9d.slice. 
Feb 9 10:00:29.069936 kubelet[2442]: I0209 10:00:29.069905 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e4f136-8942-4728-885b-9e4678b1af9d-cilium-config-path\") pod \"cilium-operator-574c4bb98d-ltwxd\" (UID: \"44e4f136-8942-4728-885b-9e4678b1af9d\") " pod="kube-system/cilium-operator-574c4bb98d-ltwxd" Feb 9 10:00:29.070354 kubelet[2442]: I0209 10:00:29.070341 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkl6\" (UniqueName: \"kubernetes.io/projected/44e4f136-8942-4728-885b-9e4678b1af9d-kube-api-access-cmkl6\") pod \"cilium-operator-574c4bb98d-ltwxd\" (UID: \"44e4f136-8942-4728-885b-9e4678b1af9d\") " pod="kube-system/cilium-operator-574c4bb98d-ltwxd" Feb 9 10:00:29.321408 env[1379]: time="2024-02-09T10:00:29.321367370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ltwxd,Uid:44e4f136-8942-4728-885b-9e4678b1af9d,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:29.404969 env[1379]: time="2024-02-09T10:00:29.404772085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:29.404969 env[1379]: time="2024-02-09T10:00:29.404806652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:29.404969 env[1379]: time="2024-02-09T10:00:29.404816655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:29.405154 env[1379]: time="2024-02-09T10:00:29.404997856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa pid=2522 runtime=io.containerd.runc.v2 Feb 9 10:00:29.415324 systemd[1]: Started cri-containerd-199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa.scope. Feb 9 10:00:29.445770 env[1379]: time="2024-02-09T10:00:29.445725391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ltwxd,Uid:44e4f136-8942-4728-885b-9e4678b1af9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\"" Feb 9 10:00:29.449118 env[1379]: time="2024-02-09T10:00:29.449088959Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:00:29.541373 env[1379]: time="2024-02-09T10:00:29.541336012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvm7n,Uid:7ea98bbe-bf3c-4b9e-ba60-14c4824b2148,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:29.549062 env[1379]: time="2024-02-09T10:00:29.549018645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pprpn,Uid:90786865-1237-4fcd-9dbf-460524fa7f8f,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:29.687220 env[1379]: time="2024-02-09T10:00:29.687076713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:29.687220 env[1379]: time="2024-02-09T10:00:29.687118283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:29.687408 env[1379]: time="2024-02-09T10:00:29.687129445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:29.688002 env[1379]: time="2024-02-09T10:00:29.687949112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0 pid=2570 runtime=io.containerd.runc.v2 Feb 9 10:00:29.688159 env[1379]: time="2024-02-09T10:00:29.687079274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:29.688264 env[1379]: time="2024-02-09T10:00:29.688225775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:29.688264 env[1379]: time="2024-02-09T10:00:29.688248220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:29.689656 env[1379]: time="2024-02-09T10:00:29.688636829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a7786316321696dbbecf515d0aa8bebec39d5fe20070252e2c35b2db951991e pid=2566 runtime=io.containerd.runc.v2 Feb 9 10:00:29.701474 systemd[1]: Started cri-containerd-3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0.scope. Feb 9 10:00:29.702311 systemd[1]: Started cri-containerd-8a7786316321696dbbecf515d0aa8bebec39d5fe20070252e2c35b2db951991e.scope. 
Feb 9 10:00:29.741188 env[1379]: time="2024-02-09T10:00:29.741148814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pprpn,Uid:90786865-1237-4fcd-9dbf-460524fa7f8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a7786316321696dbbecf515d0aa8bebec39d5fe20070252e2c35b2db951991e\"" Feb 9 10:00:29.745660 env[1379]: time="2024-02-09T10:00:29.745617113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvm7n,Uid:7ea98bbe-bf3c-4b9e-ba60-14c4824b2148,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\"" Feb 9 10:00:29.747970 env[1379]: time="2024-02-09T10:00:29.747914398Z" level=info msg="CreateContainer within sandbox \"8a7786316321696dbbecf515d0aa8bebec39d5fe20070252e2c35b2db951991e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 10:00:29.820821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327051677.mount: Deactivated successfully. Feb 9 10:00:29.899559 env[1379]: time="2024-02-09T10:00:29.899510475Z" level=info msg="CreateContainer within sandbox \"8a7786316321696dbbecf515d0aa8bebec39d5fe20070252e2c35b2db951991e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1196bf72855e54056778eb3f43c3e603f5a903addc23434e51b8c167c27eab8c\"" Feb 9 10:00:29.901597 env[1379]: time="2024-02-09T10:00:29.900481977Z" level=info msg="StartContainer for \"1196bf72855e54056778eb3f43c3e603f5a903addc23434e51b8c167c27eab8c\"" Feb 9 10:00:29.922370 systemd[1]: Started cri-containerd-1196bf72855e54056778eb3f43c3e603f5a903addc23434e51b8c167c27eab8c.scope. Feb 9 10:00:29.965329 env[1379]: time="2024-02-09T10:00:29.964579566Z" level=info msg="StartContainer for \"1196bf72855e54056778eb3f43c3e603f5a903addc23434e51b8c167c27eab8c\" returns successfully" Feb 9 10:00:31.325633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451734534.mount: Deactivated successfully. 
Feb 9 10:00:32.195319 env[1379]: time="2024-02-09T10:00:32.195271726Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:32.221399 env[1379]: time="2024-02-09T10:00:32.221346456Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:32.230391 env[1379]: time="2024-02-09T10:00:32.230344898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:32.230984 env[1379]: time="2024-02-09T10:00:32.230956429Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:00:32.233406 env[1379]: time="2024-02-09T10:00:32.232226660Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 10:00:32.233406 env[1379]: time="2024-02-09T10:00:32.233315733Z" level=info msg="CreateContainer within sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:00:32.298476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462777244.mount: Deactivated successfully. Feb 9 10:00:32.302947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351916523.mount: Deactivated successfully. 
Feb 9 10:00:32.332324 env[1379]: time="2024-02-09T10:00:32.332259427Z" level=info msg="CreateContainer within sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\""
Feb 9 10:00:32.333156 env[1379]: time="2024-02-09T10:00:32.333130453Z" level=info msg="StartContainer for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\""
Feb 9 10:00:32.350168 systemd[1]: Started cri-containerd-ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d.scope.
Feb 9 10:00:32.386460 env[1379]: time="2024-02-09T10:00:32.384678504Z" level=info msg="StartContainer for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" returns successfully"
Feb 9 10:00:33.320798 kubelet[2442]: I0209 10:00:33.320749 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-ltwxd" podStartSLOduration=1.536218919 podCreationTimestamp="2024-02-09 10:00:29 +0000 UTC" firstStartedPulling="2024-02-09 10:00:29.446938548 +0000 UTC m=+14.386517322" lastFinishedPulling="2024-02-09 10:00:32.23143253 +0000 UTC m=+17.171011344" observedRunningTime="2024-02-09 10:00:33.319802511 +0000 UTC m=+18.259381285" watchObservedRunningTime="2024-02-09 10:00:33.320712941 +0000 UTC m=+18.260291755"
Feb 9 10:00:33.321414 kubelet[2442]: I0209 10:00:33.321386 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pprpn" podStartSLOduration=5.321363837 podCreationTimestamp="2024-02-09 10:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:30.314355573 +0000 UTC m=+15.253934387" watchObservedRunningTime="2024-02-09 10:00:33.321363837 +0000 UTC m=+18.260942651"
Feb 9 10:00:38.359784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966365792.mount: Deactivated successfully.
Feb 9 10:00:41.498767 env[1379]: time="2024-02-09T10:00:41.498716671Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:00:41.517093 env[1379]: time="2024-02-09T10:00:41.517056681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:00:41.526606 env[1379]: time="2024-02-09T10:00:41.526563206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:00:41.527295 env[1379]: time="2024-02-09T10:00:41.527251648Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 10:00:41.530997 env[1379]: time="2024-02-09T10:00:41.530714422Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:00:41.579910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508132252.mount: Deactivated successfully.
Feb 9 10:00:41.587035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266420207.mount: Deactivated successfully.
Feb 9 10:00:41.613233 env[1379]: time="2024-02-09T10:00:41.613183438Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\"" Feb 9 10:00:41.613960 env[1379]: time="2024-02-09T10:00:41.613937811Z" level=info msg="StartContainer for \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\"" Feb 9 10:00:41.632643 systemd[1]: Started cri-containerd-fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94.scope. Feb 9 10:00:41.670587 env[1379]: time="2024-02-09T10:00:41.670523160Z" level=info msg="StartContainer for \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\" returns successfully" Feb 9 10:00:41.673028 systemd[1]: cri-containerd-fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94.scope: Deactivated successfully. Feb 9 10:00:42.577630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94-rootfs.mount: Deactivated successfully. 
Feb 9 10:00:42.684858 env[1379]: time="2024-02-09T10:00:42.684803868Z" level=info msg="shim disconnected" id=fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94 Feb 9 10:00:42.684858 env[1379]: time="2024-02-09T10:00:42.684858317Z" level=warning msg="cleaning up after shim disconnected" id=fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94 namespace=k8s.io Feb 9 10:00:42.685219 env[1379]: time="2024-02-09T10:00:42.684868919Z" level=info msg="cleaning up dead shim" Feb 9 10:00:42.692684 env[1379]: time="2024-02-09T10:00:42.692576539Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2874 runtime=io.containerd.runc.v2\n" Feb 9 10:00:43.335426 env[1379]: time="2024-02-09T10:00:43.335366099Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:00:43.385377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615302193.mount: Deactivated successfully. Feb 9 10:00:43.410280 env[1379]: time="2024-02-09T10:00:43.410235022Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\"" Feb 9 10:00:43.411074 env[1379]: time="2024-02-09T10:00:43.411049400Z" level=info msg="StartContainer for \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\"" Feb 9 10:00:43.426507 systemd[1]: Started cri-containerd-5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef.scope. Feb 9 10:00:43.463275 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 10:00:43.463486 systemd[1]: Stopped systemd-sysctl.service. Feb 9 10:00:43.463966 systemd[1]: Stopping systemd-sysctl.service... 
Feb 9 10:00:43.465566 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:00:43.467978 env[1379]: time="2024-02-09T10:00:43.466133310Z" level=info msg="StartContainer for \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\" returns successfully" Feb 9 10:00:43.471931 systemd[1]: cri-containerd-5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef.scope: Deactivated successfully. Feb 9 10:00:43.478128 systemd[1]: Finished systemd-sysctl.service. Feb 9 10:00:43.524042 env[1379]: time="2024-02-09T10:00:43.523981772Z" level=info msg="shim disconnected" id=5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef Feb 9 10:00:43.524042 env[1379]: time="2024-02-09T10:00:43.524039501Z" level=warning msg="cleaning up after shim disconnected" id=5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef namespace=k8s.io Feb 9 10:00:43.524042 env[1379]: time="2024-02-09T10:00:43.524049583Z" level=info msg="cleaning up dead shim" Feb 9 10:00:43.531162 env[1379]: time="2024-02-09T10:00:43.531116628Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Feb 9 10:00:44.347323 env[1379]: time="2024-02-09T10:00:44.345833515Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:00:44.404350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458882397.mount: Deactivated successfully. Feb 9 10:00:44.409342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083734032.mount: Deactivated successfully. 
Feb 9 10:00:44.443758 env[1379]: time="2024-02-09T10:00:44.443697722Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\""
Feb 9 10:00:44.444471 env[1379]: time="2024-02-09T10:00:44.444409081Z" level=info msg="StartContainer for \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\""
Feb 9 10:00:44.460253 systemd[1]: Started cri-containerd-eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6.scope.
Feb 9 10:00:44.493583 systemd[1]: cri-containerd-eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6.scope: Deactivated successfully.
Feb 9 10:00:44.495488 env[1379]: time="2024-02-09T10:00:44.495163849Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ea98bbe_bf3c_4b9e_ba60_14c4824b2148.slice/cri-containerd-eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6.scope/memory.events\": no such file or directory"
Feb 9 10:00:44.508633 env[1379]: time="2024-02-09T10:00:44.508584494Z" level=info msg="StartContainer for \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\" returns successfully"
Feb 9 10:00:44.547022 env[1379]: time="2024-02-09T10:00:44.546827209Z" level=info msg="shim disconnected" id=eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6
Feb 9 10:00:44.547237 env[1379]: time="2024-02-09T10:00:44.547218755Z" level=warning msg="cleaning up after shim disconnected" id=eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6 namespace=k8s.io
Feb 9 10:00:44.547325 env[1379]: time="2024-02-09T10:00:44.547309850Z" level=info msg="cleaning up dead shim"
Feb 9 10:00:44.554678 env[1379]: time="2024-02-09T10:00:44.554637676Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2999 runtime=io.containerd.runc.v2\n"
Feb 9 10:00:45.345943 env[1379]: time="2024-02-09T10:00:45.345890495Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:00:45.408472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404338201.mount: Deactivated successfully.
Feb 9 10:00:45.415973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808244654.mount: Deactivated successfully.
Feb 9 10:00:45.438208 env[1379]: time="2024-02-09T10:00:45.438129393Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\""
Feb 9 10:00:45.438919 env[1379]: time="2024-02-09T10:00:45.438750415Z" level=info msg="StartContainer for \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\""
Feb 9 10:00:45.454097 systemd[1]: Started cri-containerd-2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c.scope.
Feb 9 10:00:45.477768 systemd[1]: cri-containerd-2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c.scope: Deactivated successfully.
Feb 9 10:00:45.486583 env[1379]: time="2024-02-09T10:00:45.486534457Z" level=info msg="StartContainer for \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\" returns successfully" Feb 9 10:00:45.522418 env[1379]: time="2024-02-09T10:00:45.522373179Z" level=info msg="shim disconnected" id=2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c Feb 9 10:00:45.522641 env[1379]: time="2024-02-09T10:00:45.522623980Z" level=warning msg="cleaning up after shim disconnected" id=2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c namespace=k8s.io Feb 9 10:00:45.522716 env[1379]: time="2024-02-09T10:00:45.522703313Z" level=info msg="cleaning up dead shim" Feb 9 10:00:45.530179 env[1379]: time="2024-02-09T10:00:45.530143694Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3056 runtime=io.containerd.runc.v2\n" Feb 9 10:00:46.348733 env[1379]: time="2024-02-09T10:00:46.348684471Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:00:46.396111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062760572.mount: Deactivated successfully. Feb 9 10:00:46.400465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906090800.mount: Deactivated successfully. 
Feb 9 10:00:46.430868 env[1379]: time="2024-02-09T10:00:46.430813821Z" level=info msg="CreateContainer within sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\"" Feb 9 10:00:46.431586 env[1379]: time="2024-02-09T10:00:46.431560902Z" level=info msg="StartContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\"" Feb 9 10:00:46.447723 systemd[1]: Started cri-containerd-37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2.scope. Feb 9 10:00:46.492907 env[1379]: time="2024-02-09T10:00:46.492850055Z" level=info msg="StartContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" returns successfully" Feb 9 10:00:46.567370 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 10:00:46.658690 kubelet[2442]: I0209 10:00:46.657103 2442 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 10:00:46.678923 kubelet[2442]: I0209 10:00:46.678883 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:46.684197 systemd[1]: Created slice kubepods-burstable-podfb35bdaf_e982_490b_9a97_e94ee02a8029.slice. Feb 9 10:00:46.688461 kubelet[2442]: I0209 10:00:46.688437 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:00:46.692595 systemd[1]: Created slice kubepods-burstable-podc8c1ed1e_3444_49ce_9a5f_d98696cb68e4.slice. 
Feb 9 10:00:46.770720 kubelet[2442]: I0209 10:00:46.770686 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb35bdaf-e982-490b-9a97-e94ee02a8029-config-volume\") pod \"coredns-5d78c9869d-p997b\" (UID: \"fb35bdaf-e982-490b-9a97-e94ee02a8029\") " pod="kube-system/coredns-5d78c9869d-p997b" Feb 9 10:00:46.770968 kubelet[2442]: I0209 10:00:46.770955 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lvh\" (UniqueName: \"kubernetes.io/projected/fb35bdaf-e982-490b-9a97-e94ee02a8029-kube-api-access-h8lvh\") pod \"coredns-5d78c9869d-p997b\" (UID: \"fb35bdaf-e982-490b-9a97-e94ee02a8029\") " pod="kube-system/coredns-5d78c9869d-p997b" Feb 9 10:00:46.771100 kubelet[2442]: I0209 10:00:46.771089 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8c1ed1e-3444-49ce-9a5f-d98696cb68e4-config-volume\") pod \"coredns-5d78c9869d-2jwmd\" (UID: \"c8c1ed1e-3444-49ce-9a5f-d98696cb68e4\") " pod="kube-system/coredns-5d78c9869d-2jwmd" Feb 9 10:00:46.771216 kubelet[2442]: I0209 10:00:46.771206 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqzx\" (UniqueName: \"kubernetes.io/projected/c8c1ed1e-3444-49ce-9a5f-d98696cb68e4-kube-api-access-sxqzx\") pod \"coredns-5d78c9869d-2jwmd\" (UID: \"c8c1ed1e-3444-49ce-9a5f-d98696cb68e4\") " pod="kube-system/coredns-5d78c9869d-2jwmd" Feb 9 10:00:46.924325 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 10:00:46.987646 env[1379]: time="2024-02-09T10:00:46.987596233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-p997b,Uid:fb35bdaf-e982-490b-9a97-e94ee02a8029,Namespace:kube-system,Attempt:0,}"
Feb 9 10:00:46.996536 env[1379]: time="2024-02-09T10:00:46.996269471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2jwmd,Uid:c8c1ed1e-3444-49ce-9a5f-d98696cb68e4,Namespace:kube-system,Attempt:0,}"
Feb 9 10:00:47.365309 kubelet[2442]: I0209 10:00:47.364716 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lvm7n" podStartSLOduration=7.583310437 podCreationTimestamp="2024-02-09 10:00:28 +0000 UTC" firstStartedPulling="2024-02-09 10:00:29.746645068 +0000 UTC m=+14.686223882" lastFinishedPulling="2024-02-09 10:00:41.528012223 +0000 UTC m=+26.467591037" observedRunningTime="2024-02-09 10:00:47.363167753 +0000 UTC m=+32.302746567" watchObservedRunningTime="2024-02-09 10:00:47.364677592 +0000 UTC m=+32.304256486"
Feb 9 10:00:48.577082 systemd-networkd[1536]: cilium_host: Link UP
Feb 9 10:00:48.595030 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 10:00:48.595121 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 10:00:48.594426 systemd-networkd[1536]: cilium_net: Link UP
Feb 9 10:00:48.594655 systemd-networkd[1536]: cilium_net: Gained carrier
Feb 9 10:00:48.597904 systemd-networkd[1536]: cilium_host: Gained carrier
Feb 9 10:00:48.770063 systemd-networkd[1536]: cilium_vxlan: Link UP
Feb 9 10:00:48.770070 systemd-networkd[1536]: cilium_vxlan: Gained carrier
Feb 9 10:00:48.855456 systemd-networkd[1536]: cilium_net: Gained IPv6LL
Feb 9 10:00:48.920417 systemd-networkd[1536]: cilium_host: Gained IPv6LL
Feb 9 10:00:49.016316 kernel: NET: Registered PF_ALG protocol family
Feb 9 10:00:49.753529 systemd-networkd[1536]: lxc_health: Link UP
Feb 9 10:00:49.768638 systemd-networkd[1536]: lxc_health: Gained carrier
Feb 9 10:00:49.769332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:00:50.124839 kernel: eth0: renamed from tmp0b859
Feb 9 10:00:50.128502 systemd-networkd[1536]: lxca717bee01f8a: Link UP
Feb 9 10:00:50.137705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca717bee01f8a: link becomes ready
Feb 9 10:00:50.135805 systemd-networkd[1536]: lxca717bee01f8a: Gained carrier
Feb 9 10:00:50.143541 systemd-networkd[1536]: lxc6e4bfcba99f8: Link UP
Feb 9 10:00:50.151437 kernel: eth0: renamed from tmpa13b5
Feb 9 10:00:50.164379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6e4bfcba99f8: link becomes ready
Feb 9 10:00:50.164200 systemd-networkd[1536]: lxc6e4bfcba99f8: Gained carrier
Feb 9 10:00:50.503515 systemd-networkd[1536]: cilium_vxlan: Gained IPv6LL
Feb 9 10:00:51.335407 systemd-networkd[1536]: lxc_health: Gained IPv6LL
Feb 9 10:00:51.911497 systemd-networkd[1536]: lxca717bee01f8a: Gained IPv6LL
Feb 9 10:00:51.911752 systemd-networkd[1536]: lxc6e4bfcba99f8: Gained IPv6LL
Feb 9 10:00:53.730895 env[1379]: time="2024-02-09T10:00:53.730819507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:00:53.731321 env[1379]: time="2024-02-09T10:00:53.730859392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:00:53.731321 env[1379]: time="2024-02-09T10:00:53.730887436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:00:53.731525 env[1379]: time="2024-02-09T10:00:53.731488802Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7 pid=3606 runtime=io.containerd.runc.v2
Feb 9 10:00:53.743741 env[1379]: time="2024-02-09T10:00:53.743679178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:00:53.743937 env[1379]: time="2024-02-09T10:00:53.743912651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:00:53.744024 env[1379]: time="2024-02-09T10:00:53.744003104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:00:53.744473 env[1379]: time="2024-02-09T10:00:53.744417203Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b8594e7673e0402fdaa5864210259531653c864c83c496bfbe7bd228054fccd pid=3626 runtime=io.containerd.runc.v2
Feb 9 10:00:53.761924 systemd[1]: Started cri-containerd-0b8594e7673e0402fdaa5864210259531653c864c83c496bfbe7bd228054fccd.scope.
Feb 9 10:00:53.786856 systemd[1]: run-containerd-runc-k8s.io-a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7-runc.OQnyJM.mount: Deactivated successfully.
Feb 9 10:00:53.790813 systemd[1]: Started cri-containerd-a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7.scope.
Feb 9 10:00:53.837769 env[1379]: time="2024-02-09T10:00:53.837726169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2jwmd,Uid:c8c1ed1e-3444-49ce-9a5f-d98696cb68e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7\"" Feb 9 10:00:53.847993 env[1379]: time="2024-02-09T10:00:53.847954946Z" level=info msg="CreateContainer within sandbox \"a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:00:53.856240 env[1379]: time="2024-02-09T10:00:53.856197079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-p997b,Uid:fb35bdaf-e982-490b-9a97-e94ee02a8029,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b8594e7673e0402fdaa5864210259531653c864c83c496bfbe7bd228054fccd\"" Feb 9 10:00:53.862280 env[1379]: time="2024-02-09T10:00:53.862247101Z" level=info msg="CreateContainer within sandbox \"0b8594e7673e0402fdaa5864210259531653c864c83c496bfbe7bd228054fccd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 10:00:53.936780 env[1379]: time="2024-02-09T10:00:53.936728506Z" level=info msg="CreateContainer within sandbox \"a13b5e85f0cd7a78fb718d417bd807a9d644f44f526a5e9ae549a44b902322d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cecca728553068fed0adc203274e7c8e7810dde2c12ded8ccc8aa9b3841c162d\"" Feb 9 10:00:53.937603 env[1379]: time="2024-02-09T10:00:53.937577387Z" level=info msg="StartContainer for \"cecca728553068fed0adc203274e7c8e7810dde2c12ded8ccc8aa9b3841c162d\"" Feb 9 10:00:53.955944 systemd[1]: Started cri-containerd-cecca728553068fed0adc203274e7c8e7810dde2c12ded8ccc8aa9b3841c162d.scope. 
Feb 9 10:00:53.958740 env[1379]: time="2024-02-09T10:00:53.957701933Z" level=info msg="CreateContainer within sandbox \"0b8594e7673e0402fdaa5864210259531653c864c83c496bfbe7bd228054fccd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2d4263d8866865a92bdf9f862263ced7d2a59700a4c819df830f61244325d94\""
Feb 9 10:00:53.961722 env[1379]: time="2024-02-09T10:00:53.961688741Z" level=info msg="StartContainer for \"e2d4263d8866865a92bdf9f862263ced7d2a59700a4c819df830f61244325d94\""
Feb 9 10:00:53.992257 systemd[1]: Started cri-containerd-e2d4263d8866865a92bdf9f862263ced7d2a59700a4c819df830f61244325d94.scope.
Feb 9 10:00:54.007101 env[1379]: time="2024-02-09T10:00:54.007049426Z" level=info msg="StartContainer for \"cecca728553068fed0adc203274e7c8e7810dde2c12ded8ccc8aa9b3841c162d\" returns successfully"
Feb 9 10:00:54.037680 env[1379]: time="2024-02-09T10:00:54.037624228Z" level=info msg="StartContainer for \"e2d4263d8866865a92bdf9f862263ced7d2a59700a4c819df830f61244325d94\" returns successfully"
Feb 9 10:00:54.378099 kubelet[2442]: I0209 10:00:54.378066 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2jwmd" podStartSLOduration=25.378028539 podCreationTimestamp="2024-02-09 10:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:54.375085167 +0000 UTC m=+39.314663941" watchObservedRunningTime="2024-02-09 10:00:54.378028539 +0000 UTC m=+39.317607313"
Feb 9 10:00:54.404049 kubelet[2442]: I0209 10:00:54.404014 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-p997b" podStartSLOduration=25.403974653 podCreationTimestamp="2024-02-09 10:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:54.391202624 +0000 UTC m=+39.330781438" watchObservedRunningTime="2024-02-09 10:00:54.403974653 +0000 UTC m=+39.343553467"
Feb 9 10:03:11.465268 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.12.6:41244.service.
Feb 9 10:03:11.883771 sshd[3786]: Accepted publickey for core from 10.200.12.6 port 41244 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:11.885525 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:11.890007 systemd[1]: Started session-8.scope.
Feb 9 10:03:11.890327 systemd-logind[1368]: New session 8 of user core.
Feb 9 10:03:12.336462 sshd[3786]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:12.338990 systemd-logind[1368]: Session 8 logged out. Waiting for processes to exit.
Feb 9 10:03:12.339144 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 10:03:12.339769 systemd[1]: sshd@5-10.200.20.16:22-10.200.12.6:41244.service: Deactivated successfully.
Feb 9 10:03:12.340844 systemd-logind[1368]: Removed session 8.
Feb 9 10:03:17.407259 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.12.6:43936.service.
Feb 9 10:03:17.790123 sshd[3813]: Accepted publickey for core from 10.200.12.6 port 43936 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:17.791825 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:17.796073 systemd-logind[1368]: New session 9 of user core.
Feb 9 10:03:17.796765 systemd[1]: Started session-9.scope.
Feb 9 10:03:18.129123 sshd[3813]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:18.131604 systemd[1]: sshd@6-10.200.20.16:22-10.200.12.6:43936.service: Deactivated successfully.
Feb 9 10:03:18.132418 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 10:03:18.133031 systemd-logind[1368]: Session 9 logged out. Waiting for processes to exit.
Feb 9 10:03:18.133922 systemd-logind[1368]: Removed session 9.
Feb 9 10:03:23.196206 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.12.6:43948.service. Feb 9 10:03:23.587803 sshd[3825]: Accepted publickey for core from 10.200.12.6 port 43948 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:23.589471 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:23.593937 systemd[1]: Started session-10.scope. Feb 9 10:03:23.594979 systemd-logind[1368]: New session 10 of user core. Feb 9 10:03:23.929487 sshd[3825]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:23.932334 systemd-logind[1368]: Session 10 logged out. Waiting for processes to exit. Feb 9 10:03:23.932532 systemd[1]: sshd@7-10.200.20.16:22-10.200.12.6:43948.service: Deactivated successfully. Feb 9 10:03:23.933312 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 10:03:23.934194 systemd-logind[1368]: Removed session 10. Feb 9 10:03:28.994772 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.12.6:37284.service. Feb 9 10:03:29.377977 sshd[3838]: Accepted publickey for core from 10.200.12.6 port 37284 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:03:29.379745 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:03:29.384503 systemd[1]: Started session-11.scope. Feb 9 10:03:29.384922 systemd-logind[1368]: New session 11 of user core. Feb 9 10:03:29.719551 sshd[3838]: pam_unix(sshd:session): session closed for user core Feb 9 10:03:29.722148 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 10:03:29.722809 systemd-logind[1368]: Session 11 logged out. Waiting for processes to exit. Feb 9 10:03:29.723169 systemd[1]: sshd@8-10.200.20.16:22-10.200.12.6:37284.service: Deactivated successfully. Feb 9 10:03:29.724276 systemd-logind[1368]: Removed session 11. Feb 9 10:03:29.803904 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.12.6:37292.service. 
Feb 9 10:03:30.222345 sshd[3851]: Accepted publickey for core from 10.200.12.6 port 37292 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:30.223959 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:30.228319 systemd[1]: Started session-12.scope.
Feb 9 10:03:30.228796 systemd-logind[1368]: New session 12 of user core.
Feb 9 10:03:31.169237 sshd[3851]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:31.171797 systemd[1]: sshd@9-10.200.20.16:22-10.200.12.6:37292.service: Deactivated successfully.
Feb 9 10:03:31.172582 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 10:03:31.173435 systemd-logind[1368]: Session 12 logged out. Waiting for processes to exit.
Feb 9 10:03:31.174163 systemd-logind[1368]: Removed session 12.
Feb 9 10:03:31.233537 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.12.6:37298.service.
Feb 9 10:03:31.618257 sshd[3863]: Accepted publickey for core from 10.200.12.6 port 37298 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:31.619866 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:31.624143 systemd[1]: Started session-13.scope.
Feb 9 10:03:31.624545 systemd-logind[1368]: New session 13 of user core.
Feb 9 10:03:31.962070 sshd[3863]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:31.965018 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 10:03:31.965842 systemd[1]: sshd@10-10.200.20.16:22-10.200.12.6:37298.service: Deactivated successfully.
Feb 9 10:03:31.966666 systemd-logind[1368]: Session 13 logged out. Waiting for processes to exit.
Feb 9 10:03:31.967318 systemd-logind[1368]: Removed session 13.
Feb 9 10:03:37.032930 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.12.6:56578.service.
Feb 9 10:03:37.450810 sshd[3875]: Accepted publickey for core from 10.200.12.6 port 56578 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:37.452501 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:37.456840 systemd[1]: Started session-14.scope.
Feb 9 10:03:37.458150 systemd-logind[1368]: New session 14 of user core.
Feb 9 10:03:37.812243 sshd[3875]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:37.815803 systemd[1]: sshd@11-10.200.20.16:22-10.200.12.6:56578.service: Deactivated successfully.
Feb 9 10:03:37.816609 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 10:03:37.817220 systemd-logind[1368]: Session 14 logged out. Waiting for processes to exit.
Feb 9 10:03:37.818031 systemd-logind[1368]: Removed session 14.
Feb 9 10:03:42.876772 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.12.6:56588.service.
Feb 9 10:03:43.261001 sshd[3888]: Accepted publickey for core from 10.200.12.6 port 56588 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:43.261654 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:43.266558 systemd-logind[1368]: New session 15 of user core.
Feb 9 10:03:43.267032 systemd[1]: Started session-15.scope.
Feb 9 10:03:43.598504 sshd[3888]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:43.601004 systemd-logind[1368]: Session 15 logged out. Waiting for processes to exit.
Feb 9 10:03:43.601195 systemd[1]: sshd@12-10.200.20.16:22-10.200.12.6:56588.service: Deactivated successfully.
Feb 9 10:03:43.601964 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 10:03:43.602829 systemd-logind[1368]: Removed session 15.
Feb 9 10:03:43.668264 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.12.6:56594.service.
Feb 9 10:03:44.086643 sshd[3903]: Accepted publickey for core from 10.200.12.6 port 56594 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:44.088049 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:44.092074 systemd-logind[1368]: New session 16 of user core.
Feb 9 10:03:44.092628 systemd[1]: Started session-16.scope.
Feb 9 10:03:44.471753 sshd[3903]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:44.474877 systemd-logind[1368]: Session 16 logged out. Waiting for processes to exit.
Feb 9 10:03:44.474964 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 10:03:44.475689 systemd[1]: sshd@13-10.200.20.16:22-10.200.12.6:56594.service: Deactivated successfully.
Feb 9 10:03:44.476871 systemd-logind[1368]: Removed session 16.
Feb 9 10:03:44.539338 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.12.6:56608.service.
Feb 9 10:03:44.924400 sshd[3913]: Accepted publickey for core from 10.200.12.6 port 56608 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:44.926127 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:44.930366 systemd[1]: Started session-17.scope.
Feb 9 10:03:44.931281 systemd-logind[1368]: New session 17 of user core.
Feb 9 10:03:46.040187 sshd[3913]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:46.044012 systemd-logind[1368]: Session 17 logged out. Waiting for processes to exit.
Feb 9 10:03:46.044204 systemd[1]: sshd@14-10.200.20.16:22-10.200.12.6:56608.service: Deactivated successfully.
Feb 9 10:03:46.045045 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 10:03:46.046116 systemd-logind[1368]: Removed session 17.
Feb 9 10:03:46.106642 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.12.6:56620.service.
Feb 9 10:03:46.489391 sshd[3932]: Accepted publickey for core from 10.200.12.6 port 56620 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:46.491061 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:46.495721 systemd[1]: Started session-18.scope.
Feb 9 10:03:46.496150 systemd-logind[1368]: New session 18 of user core.
Feb 9 10:03:47.010498 sshd[3932]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:47.013462 systemd[1]: sshd@15-10.200.20.16:22-10.200.12.6:56620.service: Deactivated successfully.
Feb 9 10:03:47.014815 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 10:03:47.015793 systemd-logind[1368]: Session 18 logged out. Waiting for processes to exit.
Feb 9 10:03:47.016564 systemd-logind[1368]: Removed session 18.
Feb 9 10:03:47.076238 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.12.6:48100.service.
Feb 9 10:03:47.458549 sshd[3942]: Accepted publickey for core from 10.200.12.6 port 48100 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:47.459892 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:47.464544 systemd[1]: Started session-19.scope.
Feb 9 10:03:47.464847 systemd-logind[1368]: New session 19 of user core.
Feb 9 10:03:47.800912 sshd[3942]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:47.804249 systemd[1]: sshd@16-10.200.20.16:22-10.200.12.6:48100.service: Deactivated successfully.
Feb 9 10:03:47.804453 systemd-logind[1368]: Session 19 logged out. Waiting for processes to exit.
Feb 9 10:03:47.805021 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 10:03:47.805820 systemd-logind[1368]: Removed session 19.
Feb 9 10:03:52.866477 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.12.6:48104.service.
Feb 9 10:03:53.250588 sshd[3958]: Accepted publickey for core from 10.200.12.6 port 48104 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:53.252329 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:53.256658 systemd[1]: Started session-20.scope.
Feb 9 10:03:53.257707 systemd-logind[1368]: New session 20 of user core.
Feb 9 10:03:53.586792 sshd[3958]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:53.589452 systemd-logind[1368]: Session 20 logged out. Waiting for processes to exit.
Feb 9 10:03:53.589621 systemd[1]: sshd@17-10.200.20.16:22-10.200.12.6:48104.service: Deactivated successfully.
Feb 9 10:03:53.590397 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 10:03:53.591127 systemd-logind[1368]: Removed session 20.
Feb 9 10:03:58.651325 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.12.6:35844.service.
Feb 9 10:03:59.034615 sshd[3970]: Accepted publickey for core from 10.200.12.6 port 35844 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:03:59.036205 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:03:59.040729 systemd[1]: Started session-21.scope.
Feb 9 10:03:59.041056 systemd-logind[1368]: New session 21 of user core.
Feb 9 10:03:59.376485 sshd[3970]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:59.378968 systemd[1]: sshd@18-10.200.20.16:22-10.200.12.6:35844.service: Deactivated successfully.
Feb 9 10:03:59.379793 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 10:03:59.380421 systemd-logind[1368]: Session 21 logged out. Waiting for processes to exit.
Feb 9 10:03:59.381324 systemd-logind[1368]: Removed session 21.
Feb 9 10:04:04.441934 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.12.6:35852.service.
Feb 9 10:04:04.825642 sshd[3983]: Accepted publickey for core from 10.200.12.6 port 35852 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:04:04.827243 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:04:04.831089 systemd-logind[1368]: New session 22 of user core.
Feb 9 10:04:04.831619 systemd[1]: Started session-22.scope.
Feb 9 10:04:05.171932 sshd[3983]: pam_unix(sshd:session): session closed for user core
Feb 9 10:04:05.175102 systemd-logind[1368]: Session 22 logged out. Waiting for processes to exit.
Feb 9 10:04:05.175276 systemd[1]: sshd@19-10.200.20.16:22-10.200.12.6:35852.service: Deactivated successfully.
Feb 9 10:04:05.176080 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 10:04:05.176794 systemd-logind[1368]: Removed session 22.
Feb 9 10:04:05.236551 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.12.6:35856.service.
Feb 9 10:04:05.621376 sshd[3994]: Accepted publickey for core from 10.200.12.6 port 35856 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:04:05.622711 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:04:05.626664 systemd-logind[1368]: New session 23 of user core.
Feb 9 10:04:05.627166 systemd[1]: Started session-23.scope.
Feb 9 10:04:07.713950 systemd[1]: run-containerd-runc-k8s.io-37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2-runc.sbHapl.mount: Deactivated successfully.
Feb 9 10:04:07.723428 env[1379]: time="2024-02-09T10:04:07.723386334Z" level=info msg="StopContainer for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" with timeout 30 (s)"
Feb 9 10:04:07.726579 env[1379]: time="2024-02-09T10:04:07.726542973Z" level=info msg="Stop container \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" with signal terminated"
Feb 9 10:04:07.738337 systemd[1]: cri-containerd-ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d.scope: Deactivated successfully.
Feb 9 10:04:07.744182 env[1379]: time="2024-02-09T10:04:07.744127032Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 10:04:07.747924 env[1379]: time="2024-02-09T10:04:07.747890613Z" level=info msg="StopContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" with timeout 1 (s)"
Feb 9 10:04:07.750767 env[1379]: time="2024-02-09T10:04:07.750726839Z" level=info msg="Stop container \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" with signal terminated"
Feb 9 10:04:07.763086 systemd-networkd[1536]: lxc_health: Link DOWN
Feb 9 10:04:07.763093 systemd-networkd[1536]: lxc_health: Lost carrier
Feb 9 10:04:07.786098 systemd[1]: cri-containerd-37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2.scope: Deactivated successfully.
Feb 9 10:04:07.786434 systemd[1]: cri-containerd-37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2.scope: Consumed 6.375s CPU time.
Feb 9 10:04:07.789106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d-rootfs.mount: Deactivated successfully.
Feb 9 10:04:07.808984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2-rootfs.mount: Deactivated successfully.
Feb 9 10:04:07.877744 env[1379]: time="2024-02-09T10:04:07.877677840Z" level=info msg="shim disconnected" id=ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d
Feb 9 10:04:07.877744 env[1379]: time="2024-02-09T10:04:07.877735482Z" level=warning msg="cleaning up after shim disconnected" id=ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d namespace=k8s.io
Feb 9 10:04:07.877744 env[1379]: time="2024-02-09T10:04:07.877746363Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:07.878676 env[1379]: time="2024-02-09T10:04:07.878634676Z" level=info msg="shim disconnected" id=37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2
Feb 9 10:04:07.878919 env[1379]: time="2024-02-09T10:04:07.878891645Z" level=warning msg="cleaning up after shim disconnected" id=37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2 namespace=k8s.io
Feb 9 10:04:07.879020 env[1379]: time="2024-02-09T10:04:07.879005170Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:07.886559 env[1379]: time="2024-02-09T10:04:07.886498091Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:07.887351 env[1379]: time="2024-02-09T10:04:07.887284200Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:07.896644 env[1379]: time="2024-02-09T10:04:07.896575749Z" level=info msg="StopContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" returns successfully"
Feb 9 10:04:07.897395 env[1379]: time="2024-02-09T10:04:07.897370618Z" level=info msg="StopPodSandbox for \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\""
Feb 9 10:04:07.897628 env[1379]: time="2024-02-09T10:04:07.897605987Z" level=info msg="Container to stop \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.897778 env[1379]: time="2024-02-09T10:04:07.897758673Z" level=info msg="Container to stop \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.897852 env[1379]: time="2024-02-09T10:04:07.897836276Z" level=info msg="Container to stop \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.897914 env[1379]: time="2024-02-09T10:04:07.897899318Z" level=info msg="Container to stop \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.897991 env[1379]: time="2024-02-09T10:04:07.897975521Z" level=info msg="Container to stop \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.903327 env[1379]: time="2024-02-09T10:04:07.903267520Z" level=info msg="StopContainer for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" returns successfully"
Feb 9 10:04:07.903803 env[1379]: time="2024-02-09T10:04:07.903773379Z" level=info msg="StopPodSandbox for \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\""
Feb 9 10:04:07.903881 env[1379]: time="2024-02-09T10:04:07.903828981Z" level=info msg="Container to stop \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:07.904380 systemd[1]: cri-containerd-3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0.scope: Deactivated successfully.
Feb 9 10:04:07.915771 systemd[1]: cri-containerd-199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa.scope: Deactivated successfully.
Feb 9 10:04:07.958280 env[1379]: time="2024-02-09T10:04:07.958224100Z" level=info msg="shim disconnected" id=3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0
Feb 9 10:04:07.958280 env[1379]: time="2024-02-09T10:04:07.958269102Z" level=warning msg="cleaning up after shim disconnected" id=3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0 namespace=k8s.io
Feb 9 10:04:07.958280 env[1379]: time="2024-02-09T10:04:07.958279742Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:07.958724 env[1379]: time="2024-02-09T10:04:07.958682878Z" level=info msg="shim disconnected" id=199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa
Feb 9 10:04:07.958777 env[1379]: time="2024-02-09T10:04:07.958724239Z" level=warning msg="cleaning up after shim disconnected" id=199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa namespace=k8s.io
Feb 9 10:04:07.958777 env[1379]: time="2024-02-09T10:04:07.958734119Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:07.967986 env[1379]: time="2024-02-09T10:04:07.967871702Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4135 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:07.968481 env[1379]: time="2024-02-09T10:04:07.968453404Z" level=info msg="TearDown network for sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" successfully"
Feb 9 10:04:07.968586 env[1379]: time="2024-02-09T10:04:07.968569608Z" level=info msg="StopPodSandbox for \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" returns successfully"
Feb 9 10:04:07.968745 env[1379]: time="2024-02-09T10:04:07.967871742Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4134 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:07.969056 env[1379]: time="2024-02-09T10:04:07.969034346Z" level=info msg="TearDown network for sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" successfully"
Feb 9 10:04:07.969143 env[1379]: time="2024-02-09T10:04:07.969126029Z" level=info msg="StopPodSandbox for \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" returns successfully"
Feb 9 10:04:08.016895 kubelet[2442]: I0209 10:04:08.016860 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-config-path\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.016895 kubelet[2442]: I0209 10:04:08.016904 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-xtables-lock\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.016923 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-lib-modules\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.016940 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cni-path\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.016958 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-net\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.016978 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e4f136-8942-4728-885b-9e4678b1af9d-cilium-config-path\") pod \"44e4f136-8942-4728-885b-9e4678b1af9d\" (UID: \"44e4f136-8942-4728-885b-9e4678b1af9d\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.017002 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c95kl\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017275 kubelet[2442]: I0209 10:04:08.017022 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmkl6\" (UniqueName: \"kubernetes.io/projected/44e4f136-8942-4728-885b-9e4678b1af9d-kube-api-access-cmkl6\") pod \"44e4f136-8942-4728-885b-9e4678b1af9d\" (UID: \"44e4f136-8942-4728-885b-9e4678b1af9d\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017043 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hubble-tls\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017060 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hostproc\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017077 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-bpf-maps\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017103 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-clustermesh-secrets\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017120 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-etc-cni-netd\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017469 kubelet[2442]: I0209 10:04:08.017135 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-run\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017601 kubelet[2442]: I0209 10:04:08.017153 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-kernel\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017601 kubelet[2442]: I0209 10:04:08.017169 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-cgroup\") pod \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\" (UID: \"7ea98bbe-bf3c-4b9e-ba60-14c4824b2148\") "
Feb 9 10:04:08.017601 kubelet[2442]: I0209 10:04:08.017225 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.017601 kubelet[2442]: I0209 10:04:08.017258 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.017601 kubelet[2442]: I0209 10:04:08.017274 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.017712 kubelet[2442]: I0209 10:04:08.017315 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.017712 kubelet[2442]: I0209 10:04:08.017332 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.017712 kubelet[2442]: W0209 10:04:08.017489 2442 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/44e4f136-8942-4728-885b-9e4678b1af9d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:04:08.017951 kubelet[2442]: W0209 10:04:08.017926 2442 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:04:08.019278 kubelet[2442]: I0209 10:04:08.019237 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e4f136-8942-4728-885b-9e4678b1af9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44e4f136-8942-4728-885b-9e4678b1af9d" (UID: "44e4f136-8942-4728-885b-9e4678b1af9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:04:08.020038 kubelet[2442]: I0209 10:04:08.020011 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:04:08.020557 kubelet[2442]: I0209 10:04:08.020525 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.020685 kubelet[2442]: I0209 10:04:08.020670 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.020794 kubelet[2442]: I0209 10:04:08.020765 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.020884 kubelet[2442]: I0209 10:04:08.020872 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.020983 kubelet[2442]: I0209 10:04:08.020968 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:08.022184 kubelet[2442]: I0209 10:04:08.022156 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl" (OuterVolumeSpecName: "kube-api-access-c95kl") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "kube-api-access-c95kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:08.024060 kubelet[2442]: I0209 10:04:08.024029 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e4f136-8942-4728-885b-9e4678b1af9d-kube-api-access-cmkl6" (OuterVolumeSpecName: "kube-api-access-cmkl6") pod "44e4f136-8942-4728-885b-9e4678b1af9d" (UID: "44e4f136-8942-4728-885b-9e4678b1af9d"). InnerVolumeSpecName "kube-api-access-cmkl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:08.024269 kubelet[2442]: I0209 10:04:08.024240 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:04:08.026141 kubelet[2442]: I0209 10:04:08.026113 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" (UID: "7ea98bbe-bf3c-4b9e-ba60-14c4824b2148"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:08.117548 kubelet[2442]: I0209 10:04:08.117488 2442 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-etc-cni-netd\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117562 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-run\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117584 2442 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hostproc\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117602 2442 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-bpf-maps\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117612 2442 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-clustermesh-secrets\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117623 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-cgroup\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117634 2442 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117643 2442 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-xtables-lock\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117704 kubelet[2442]: I0209 10:04:08.117654 2442 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-lib-modules\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117666 2442 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cni-path\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117676 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-cilium-config-path\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117685 2442 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-host-proc-sys-net\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117695 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e4f136-8942-4728-885b-9e4678b1af9d-cilium-config-path\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117707 2442 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c95kl\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-kube-api-access-c95kl\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117717 2442 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cmkl6\" (UniqueName: \"kubernetes.io/projected/44e4f136-8942-4728-885b-9e4678b1af9d-kube-api-access-cmkl6\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.117900 kubelet[2442]: I0209 10:04:08.117727 2442 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148-hubble-tls\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:08.709666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0-rootfs.mount: Deactivated successfully.
Feb 9 10:04:08.709772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0-shm.mount: Deactivated successfully.
Feb 9 10:04:08.709833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa-rootfs.mount: Deactivated successfully.
Feb 9 10:04:08.709884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa-shm.mount: Deactivated successfully.
Feb 9 10:04:08.709937 systemd[1]: var-lib-kubelet-pods-7ea98bbe\x2dbf3c\x2d4b9e\x2dba60\x2d14c4824b2148-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc95kl.mount: Deactivated successfully. Feb 9 10:04:08.709996 systemd[1]: var-lib-kubelet-pods-44e4f136\x2d8942\x2d4728\x2d885b\x2d9e4678b1af9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcmkl6.mount: Deactivated successfully. Feb 9 10:04:08.710066 systemd[1]: var-lib-kubelet-pods-7ea98bbe\x2dbf3c\x2d4b9e\x2dba60\x2d14c4824b2148-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 10:04:08.710116 systemd[1]: var-lib-kubelet-pods-7ea98bbe\x2dbf3c\x2d4b9e\x2dba60\x2d14c4824b2148-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:04:08.719791 kubelet[2442]: I0209 10:04:08.719765 2442 scope.go:115] "RemoveContainer" containerID="37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2" Feb 9 10:04:08.724512 systemd[1]: Removed slice kubepods-burstable-pod7ea98bbe_bf3c_4b9e_ba60_14c4824b2148.slice. Feb 9 10:04:08.724602 systemd[1]: kubepods-burstable-pod7ea98bbe_bf3c_4b9e_ba60_14c4824b2148.slice: Consumed 6.466s CPU time. Feb 9 10:04:08.729313 env[1379]: time="2024-02-09T10:04:08.729118902Z" level=info msg="RemoveContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\"" Feb 9 10:04:08.730407 systemd[1]: Removed slice kubepods-besteffort-pod44e4f136_8942_4728_885b_9e4678b1af9d.slice. 
Feb 9 10:04:08.748698 env[1379]: time="2024-02-09T10:04:08.748544755Z" level=info msg="RemoveContainer for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" returns successfully" Feb 9 10:04:08.749118 kubelet[2442]: I0209 10:04:08.749087 2442 scope.go:115] "RemoveContainer" containerID="2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c" Feb 9 10:04:08.750346 env[1379]: time="2024-02-09T10:04:08.750279101Z" level=info msg="RemoveContainer for \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\"" Feb 9 10:04:08.764429 env[1379]: time="2024-02-09T10:04:08.764225067Z" level=info msg="RemoveContainer for \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\" returns successfully" Feb 9 10:04:08.764660 kubelet[2442]: I0209 10:04:08.764563 2442 scope.go:115] "RemoveContainer" containerID="eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6" Feb 9 10:04:08.766597 env[1379]: time="2024-02-09T10:04:08.766265784Z" level=info msg="RemoveContainer for \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\"" Feb 9 10:04:08.782224 env[1379]: time="2024-02-09T10:04:08.781992977Z" level=info msg="RemoveContainer for \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\" returns successfully" Feb 9 10:04:08.782536 kubelet[2442]: I0209 10:04:08.782518 2442 scope.go:115] "RemoveContainer" containerID="5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef" Feb 9 10:04:08.784694 env[1379]: time="2024-02-09T10:04:08.784390828Z" level=info msg="RemoveContainer for \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\"" Feb 9 10:04:08.803214 env[1379]: time="2024-02-09T10:04:08.803078213Z" level=info msg="RemoveContainer for \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\" returns successfully" Feb 9 10:04:08.803362 kubelet[2442]: I0209 10:04:08.803349 2442 scope.go:115] "RemoveContainer" 
containerID="fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94" Feb 9 10:04:08.804841 env[1379]: time="2024-02-09T10:04:08.804600311Z" level=info msg="RemoveContainer for \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\"" Feb 9 10:04:08.825459 env[1379]: time="2024-02-09T10:04:08.825416936Z" level=info msg="RemoveContainer for \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\" returns successfully" Feb 9 10:04:08.825838 kubelet[2442]: I0209 10:04:08.825820 2442 scope.go:115] "RemoveContainer" containerID="37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2" Feb 9 10:04:08.826247 env[1379]: time="2024-02-09T10:04:08.826171085Z" level=error msg="ContainerStatus for \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\": not found" Feb 9 10:04:08.826478 kubelet[2442]: E0209 10:04:08.826463 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\": not found" containerID="37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2" Feb 9 10:04:08.826589 kubelet[2442]: I0209 10:04:08.826578 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2} err="failed to get container status \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"37b07bcbc2d1e03acde1ea102e9d3104458167fc16e3ed2cabadf11b41d56fa2\": not found" Feb 9 10:04:08.826665 kubelet[2442]: I0209 10:04:08.826656 2442 scope.go:115] "RemoveContainer" 
containerID="2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c" Feb 9 10:04:08.826934 env[1379]: time="2024-02-09T10:04:08.826891032Z" level=error msg="ContainerStatus for \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\": not found" Feb 9 10:04:08.827177 kubelet[2442]: E0209 10:04:08.827157 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\": not found" containerID="2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c" Feb 9 10:04:08.827242 kubelet[2442]: I0209 10:04:08.827191 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c} err="failed to get container status \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bb954abb4d2d7c99fde9c7c88afc038ce6d419fb71b350dee9b6185022fdc1c\": not found" Feb 9 10:04:08.827242 kubelet[2442]: I0209 10:04:08.827201 2442 scope.go:115] "RemoveContainer" containerID="eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6" Feb 9 10:04:08.827451 env[1379]: time="2024-02-09T10:04:08.827402291Z" level=error msg="ContainerStatus for \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\": not found" Feb 9 10:04:08.827576 kubelet[2442]: E0209 10:04:08.827556 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\": not found" containerID="eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6" Feb 9 10:04:08.827628 kubelet[2442]: I0209 10:04:08.827588 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6} err="failed to get container status \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"eba273fe81102726eaa9ce0dc86907397c3e8d96bc43d819b77e710e0b555bb6\": not found" Feb 9 10:04:08.827628 kubelet[2442]: I0209 10:04:08.827599 2442 scope.go:115] "RemoveContainer" containerID="5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef" Feb 9 10:04:08.827858 env[1379]: time="2024-02-09T10:04:08.827812747Z" level=error msg="ContainerStatus for \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\": not found" Feb 9 10:04:08.828054 kubelet[2442]: E0209 10:04:08.828028 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\": not found" containerID="5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef" Feb 9 10:04:08.828054 kubelet[2442]: I0209 10:04:08.828055 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef} err="failed to get container status \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"5410131d1a819a7a0693999f50ef637ae94ae06e193d2676421cc8573b01f6ef\": not found" Feb 9 10:04:08.828151 kubelet[2442]: I0209 10:04:08.828064 2442 scope.go:115] "RemoveContainer" containerID="fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94" Feb 9 10:04:08.828267 env[1379]: time="2024-02-09T10:04:08.828215322Z" level=error msg="ContainerStatus for \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\": not found" Feb 9 10:04:08.828417 kubelet[2442]: E0209 10:04:08.828376 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\": not found" containerID="fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94" Feb 9 10:04:08.828467 kubelet[2442]: I0209 10:04:08.828425 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94} err="failed to get container status \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd402ad8172d3cbee119e983e77a2e74f7b7ab594dbd497d2ffcb9bc0eec7c94\": not found" Feb 9 10:04:08.828467 kubelet[2442]: I0209 10:04:08.828436 2442 scope.go:115] "RemoveContainer" containerID="ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d" Feb 9 10:04:08.829683 env[1379]: time="2024-02-09T10:04:08.829647896Z" level=info msg="RemoveContainer for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\"" Feb 9 10:04:08.851300 env[1379]: time="2024-02-09T10:04:08.851243031Z" level=info msg="RemoveContainer for 
\"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" returns successfully" Feb 9 10:04:08.851538 kubelet[2442]: I0209 10:04:08.851511 2442 scope.go:115] "RemoveContainer" containerID="ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d" Feb 9 10:04:08.851960 env[1379]: time="2024-02-09T10:04:08.851894495Z" level=error msg="ContainerStatus for \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\": not found" Feb 9 10:04:08.852185 kubelet[2442]: E0209 10:04:08.852162 2442 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\": not found" containerID="ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d" Feb 9 10:04:08.852243 kubelet[2442]: I0209 10:04:08.852211 2442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d} err="failed to get container status \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec08a93b9b728f14bff75ef89a8265524da8c4dd350ff90e47eb7354dc44545d\": not found" Feb 9 10:04:09.224497 kubelet[2442]: I0209 10:04:09.224471 2442 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=44e4f136-8942-4728-885b-9e4678b1af9d path="/var/lib/kubelet/pods/44e4f136-8942-4728-885b-9e4678b1af9d/volumes" Feb 9 10:04:09.225262 kubelet[2442]: I0209 10:04:09.225249 2442 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=7ea98bbe-bf3c-4b9e-ba60-14c4824b2148 path="/var/lib/kubelet/pods/7ea98bbe-bf3c-4b9e-ba60-14c4824b2148/volumes" Feb 9 10:04:09.721807 
sshd[3994]: pam_unix(sshd:session): session closed for user core Feb 9 10:04:09.724702 systemd-logind[1368]: Session 23 logged out. Waiting for processes to exit. Feb 9 10:04:09.724890 systemd[1]: sshd@20-10.200.20.16:22-10.200.12.6:35856.service: Deactivated successfully. Feb 9 10:04:09.725629 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 10:04:09.725816 systemd[1]: session-23.scope: Consumed 1.232s CPU time. Feb 9 10:04:09.727056 systemd-logind[1368]: Removed session 23. Feb 9 10:04:09.786076 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.12.6:58690.service. Feb 9 10:04:10.168813 sshd[4168]: Accepted publickey for core from 10.200.12.6 port 58690 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:04:10.170497 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:04:10.174348 systemd-logind[1368]: New session 24 of user core. Feb 9 10:04:10.175088 systemd[1]: Started session-24.scope. Feb 9 10:04:10.372748 kubelet[2442]: E0209 10:04:10.372708 2442 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:04:10.599004 kubelet[2442]: I0209 10:04:10.598980 2442 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-ff24132019" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:04:10.59892438 +0000 UTC m=+235.538503194 LastTransitionTime:2024-02-09 10:04:10.59892438 +0000 UTC m=+235.538503194 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:04:12.703744 kubelet[2442]: I0209 10:04:12.703709 2442 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:04:12.704185 kubelet[2442]: E0209 10:04:12.704171 2442 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="apply-sysctl-overwrites" Feb 9 10:04:12.704253 kubelet[2442]: E0209 10:04:12.704244 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="mount-bpf-fs" Feb 9 10:04:12.704340 kubelet[2442]: E0209 10:04:12.704330 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="clean-cilium-state" Feb 9 10:04:12.704404 kubelet[2442]: E0209 10:04:12.704396 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e4f136-8942-4728-885b-9e4678b1af9d" containerName="cilium-operator" Feb 9 10:04:12.704462 kubelet[2442]: E0209 10:04:12.704453 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="mount-cgroup" Feb 9 10:04:12.704517 kubelet[2442]: E0209 10:04:12.704509 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="cilium-agent" Feb 9 10:04:12.704592 kubelet[2442]: I0209 10:04:12.704582 2442 memory_manager.go:346] "RemoveStaleState removing state" podUID="44e4f136-8942-4728-885b-9e4678b1af9d" containerName="cilium-operator" Feb 9 10:04:12.704642 kubelet[2442]: I0209 10:04:12.704634 2442 memory_manager.go:346] "RemoveStaleState removing state" podUID="7ea98bbe-bf3c-4b9e-ba60-14c4824b2148" containerName="cilium-agent" Feb 9 10:04:12.709438 systemd[1]: Created slice kubepods-burstable-pod32c7e09b_64e2_47a4_91c0_f1ca76b7ed16.slice. 
Feb 9 10:04:12.742489 kubelet[2442]: I0209 10:04:12.742453 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-kernel\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.742723 kubelet[2442]: I0209 10:04:12.742709 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkzd\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-kube-api-access-kjkzd\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.742822 kubelet[2442]: I0209 10:04:12.742810 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-xtables-lock\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.742906 kubelet[2442]: I0209 10:04:12.742896 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-ipsec-secrets\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743402 kubelet[2442]: I0209 10:04:12.743386 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hubble-tls\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743526 kubelet[2442]: I0209 10:04:12.743514 2442 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-config-path\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743616 kubelet[2442]: I0209 10:04:12.743606 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-run\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743693 kubelet[2442]: I0209 10:04:12.743683 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-clustermesh-secrets\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743776 kubelet[2442]: I0209 10:04:12.743766 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hostproc\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743860 kubelet[2442]: I0209 10:04:12.743851 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-lib-modules\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.743934 kubelet[2442]: I0209 10:04:12.743923 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-net\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.744013 kubelet[2442]: I0209 10:04:12.744003 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cni-path\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.750601 kubelet[2442]: I0209 10:04:12.750572 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-bpf-maps\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.750727 kubelet[2442]: I0209 10:04:12.750716 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-cgroup\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.750822 kubelet[2442]: I0209 10:04:12.750812 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-etc-cni-netd\") pod \"cilium-fcdp9\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") " pod="kube-system/cilium-fcdp9" Feb 9 10:04:12.759839 sshd[4168]: pam_unix(sshd:session): session closed for user core Feb 9 10:04:12.762510 systemd-logind[1368]: Session 24 logged out. Waiting for processes to exit. Feb 9 10:04:12.763768 systemd[1]: sshd@21-10.200.20.16:22-10.200.12.6:58690.service: Deactivated successfully. 
Feb 9 10:04:12.764558 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 10:04:12.764736 systemd[1]: session-24.scope: Consumed 2.219s CPU time. Feb 9 10:04:12.765608 systemd-logind[1368]: Removed session 24. Feb 9 10:04:12.829643 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.12.6:58706.service. Feb 9 10:04:13.017348 env[1379]: time="2024-02-09T10:04:13.012884941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcdp9,Uid:32c7e09b-64e2-47a4-91c0-f1ca76b7ed16,Namespace:kube-system,Attempt:0,}" Feb 9 10:04:13.069490 env[1379]: time="2024-02-09T10:04:13.069406179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:04:13.069490 env[1379]: time="2024-02-09T10:04:13.069456421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:04:13.069675 env[1379]: time="2024-02-09T10:04:13.069467021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:04:13.069802 env[1379]: time="2024-02-09T10:04:13.069767833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff pid=4190 runtime=io.containerd.runc.v2 Feb 9 10:04:13.080651 systemd[1]: Started cri-containerd-bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff.scope. 
Feb 9 10:04:13.106324 env[1379]: time="2024-02-09T10:04:13.106257852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcdp9,Uid:32c7e09b-64e2-47a4-91c0-f1ca76b7ed16,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\"" Feb 9 10:04:13.111561 env[1379]: time="2024-02-09T10:04:13.111516736Z" level=info msg="CreateContainer within sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:04:13.156434 env[1379]: time="2024-02-09T10:04:13.156354280Z" level=info msg="CreateContainer within sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\"" Feb 9 10:04:13.158583 env[1379]: time="2024-02-09T10:04:13.157258755Z" level=info msg="StartContainer for \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\"" Feb 9 10:04:13.171174 systemd[1]: Started cri-containerd-92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3.scope. Feb 9 10:04:13.182206 systemd[1]: cri-containerd-92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3.scope: Deactivated successfully. Feb 9 10:04:13.214634 sshd[4178]: Accepted publickey for core from 10.200.12.6 port 58706 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:04:13.216082 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:04:13.220651 systemd[1]: Started session-25.scope. Feb 9 10:04:13.221197 systemd-logind[1368]: New session 25 of user core. 
Feb 9 10:04:13.260438 env[1379]: time="2024-02-09T10:04:13.260282841Z" level=info msg="shim disconnected" id=92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3
Feb 9 10:04:13.260688 env[1379]: time="2024-02-09T10:04:13.260670616Z" level=warning msg="cleaning up after shim disconnected" id=92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3 namespace=k8s.io
Feb 9 10:04:13.260820 env[1379]: time="2024-02-09T10:04:13.260806301Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:13.269211 env[1379]: time="2024-02-09T10:04:13.268385476Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4250 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:04:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 10:04:13.269687 env[1379]: time="2024-02-09T10:04:13.269589403Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed"
Feb 9 10:04:13.270090 env[1379]: time="2024-02-09T10:04:13.270059021Z" level=error msg="Failed to pipe stderr of container \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\"" error="reading from a closed fifo"
Feb 9 10:04:13.275651 env[1379]: time="2024-02-09T10:04:13.275606917Z" level=error msg="Failed to pipe stdout of container \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\"" error="reading from a closed fifo"
Feb 9 10:04:13.282501 env[1379]: time="2024-02-09T10:04:13.282430342Z" level=error msg="StartContainer for \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 10:04:13.282755 kubelet[2442]: E0209 10:04:13.282730 2442 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3"
Feb 9 10:04:13.282874 kubelet[2442]: E0209 10:04:13.282853 2442 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 10:04:13.282874 kubelet[2442]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 10:04:13.282874 kubelet[2442]: rm /hostbin/cilium-mount
Feb 9 10:04:13.282960 kubelet[2442]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kjkzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-fcdp9_kube-system(32c7e09b-64e2-47a4-91c0-f1ca76b7ed16): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 10:04:13.282960 kubelet[2442]: E0209 10:04:13.282901 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fcdp9" podUID=32c7e09b-64e2-47a4-91c0-f1ca76b7ed16
Feb 9 10:04:13.563179 sshd[4178]: pam_unix(sshd:session): session closed for user core
Feb 9 10:04:13.567127 systemd[1]: sshd@22-10.200.20.16:22-10.200.12.6:58706.service: Deactivated successfully.
Feb 9 10:04:13.567897 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 10:04:13.568846 systemd-logind[1368]: Session 25 logged out. Waiting for processes to exit.
Feb 9 10:04:13.569657 systemd-logind[1368]: Removed session 25.
Feb 9 10:04:13.627772 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.12.6:58714.service.
Feb 9 10:04:13.737591 env[1379]: time="2024-02-09T10:04:13.737464995Z" level=info msg="StopPodSandbox for \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\""
Feb 9 10:04:13.737591 env[1379]: time="2024-02-09T10:04:13.737542878Z" level=info msg="Container to stop \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:04:13.746972 systemd[1]: cri-containerd-bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff.scope: Deactivated successfully.
Feb 9 10:04:13.817685 env[1379]: time="2024-02-09T10:04:13.817643272Z" level=info msg="shim disconnected" id=bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff
Feb 9 10:04:13.818325 env[1379]: time="2024-02-09T10:04:13.818274417Z" level=warning msg="cleaning up after shim disconnected" id=bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff namespace=k8s.io
Feb 9 10:04:13.818419 env[1379]: time="2024-02-09T10:04:13.818405102Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:13.826105 env[1379]: time="2024-02-09T10:04:13.826069480Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4294 runtime=io.containerd.runc.v2\n"
Feb 9 10:04:13.826540 env[1379]: time="2024-02-09T10:04:13.826512737Z" level=info msg="TearDown network for sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" successfully"
Feb 9 10:04:13.826630 env[1379]: time="2024-02-09T10:04:13.826613461Z" level=info msg="StopPodSandbox for \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" returns successfully"
Feb 9 10:04:13.856718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff-shm.mount: Deactivated successfully.
Feb 9 10:04:13.959969 kubelet[2442]: I0209 10:04:13.959845 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-bpf-maps\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.959969 kubelet[2442]: I0209 10:04:13.959895 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hostproc\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.959969 kubelet[2442]: I0209 10:04:13.959919 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-xtables-lock\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.959969 kubelet[2442]: I0209 10:04:13.959937 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.960401 kubelet[2442]: W0209 10:04:13.960162 2442 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.959945 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-config-path\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960495 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-cgroup\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960517 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cni-path\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960542 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hubble-tls\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960559 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-run\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960575 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-net\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960593 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-etc-cni-netd\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960614 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjkzd\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-kube-api-access-kjkzd\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960630 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-lib-modules\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960649 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-kernel\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960670 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-ipsec-secrets\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960691 2442 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-clustermesh-secrets\") pod \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\" (UID: \"32c7e09b-64e2-47a4-91c0-f1ca76b7ed16\") "
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.960732 2442 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-bpf-maps\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.961331 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.961363 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cni-path" (OuterVolumeSpecName: "cni-path") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.962820 kubelet[2442]: I0209 10:04:13.962093 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962148 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hostproc" (OuterVolumeSpecName: "hostproc") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962168 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962430 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962464 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962483 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962500 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.963405 kubelet[2442]: I0209 10:04:13.962515 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:04:13.964583 systemd[1]: var-lib-kubelet-pods-32c7e09b\x2d64e2\x2d47a4\x2d91c0\x2df1ca76b7ed16-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:04:13.966958 kubelet[2442]: I0209 10:04:13.966930 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:04:13.968465 systemd[1]: var-lib-kubelet-pods-32c7e09b\x2d64e2\x2d47a4\x2d91c0\x2df1ca76b7ed16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkjkzd.mount: Deactivated successfully.
Feb 9 10:04:13.971750 kubelet[2442]: I0209 10:04:13.970361 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-kube-api-access-kjkzd" (OuterVolumeSpecName: "kube-api-access-kjkzd") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "kube-api-access-kjkzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:13.972778 systemd[1]: var-lib-kubelet-pods-32c7e09b\x2d64e2\x2d47a4\x2d91c0\x2df1ca76b7ed16-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:04:13.973410 kubelet[2442]: I0209 10:04:13.973379 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:04:13.974698 systemd[1]: var-lib-kubelet-pods-32c7e09b\x2d64e2\x2d47a4\x2d91c0\x2df1ca76b7ed16-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:04:13.975549 kubelet[2442]: I0209 10:04:13.975525 2442 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" (UID: "32c7e09b-64e2-47a4-91c0-f1ca76b7ed16"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:04:14.012538 sshd[4273]: Accepted publickey for core from 10.200.12.6 port 58714 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 10:04:14.013855 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:04:14.017342 systemd-logind[1368]: New session 26 of user core.
Feb 9 10:04:14.017978 systemd[1]: Started session-26.scope.
Feb 9 10:04:14.061187 kubelet[2442]: I0209 10:04:14.061156 2442 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hostproc\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061398 kubelet[2442]: I0209 10:04:14.061386 2442 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-xtables-lock\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061470 kubelet[2442]: I0209 10:04:14.061461 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-config-path\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061532 kubelet[2442]: I0209 10:04:14.061523 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-cgroup\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061601 kubelet[2442]: I0209 10:04:14.061592 2442 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cni-path\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061660 kubelet[2442]: I0209 10:04:14.061651 2442 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-hubble-tls\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061729 kubelet[2442]: I0209 10:04:14.061720 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-run\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061793 kubelet[2442]: I0209 10:04:14.061784 2442 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-net\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061853 kubelet[2442]: I0209 10:04:14.061844 2442 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-etc-cni-netd\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061917 kubelet[2442]: I0209 10:04:14.061908 2442 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kjkzd\" (UniqueName: \"kubernetes.io/projected/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-kube-api-access-kjkzd\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.061977 kubelet[2442]: I0209 10:04:14.061967 2442 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-lib-modules\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.062033 kubelet[2442]: I0209 10:04:14.062024 2442 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.062091 kubelet[2442]: I0209 10:04:14.062082 2442 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.062149 kubelet[2442]: I0209 10:04:14.062141 2442 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16-clustermesh-secrets\") on node \"ci-3510.3.2-a-ff24132019\" DevicePath \"\""
Feb 9 10:04:14.742790 kubelet[2442]: I0209 10:04:14.739705 2442 scope.go:115] "RemoveContainer" containerID="92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3"
Feb 9 10:04:14.743950 systemd[1]: Removed slice kubepods-burstable-pod32c7e09b_64e2_47a4_91c0_f1ca76b7ed16.slice.
Feb 9 10:04:14.746066 env[1379]: time="2024-02-09T10:04:14.745767003Z" level=info msg="RemoveContainer for \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\""
Feb 9 10:04:14.760192 env[1379]: time="2024-02-09T10:04:14.760037721Z" level=info msg="RemoveContainer for \"92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3\" returns successfully"
Feb 9 10:04:14.786051 kubelet[2442]: I0209 10:04:14.786003 2442 topology_manager.go:212] "Topology Admit Handler"
Feb 9 10:04:14.786206 kubelet[2442]: E0209 10:04:14.786075 2442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" containerName="mount-cgroup"
Feb 9 10:04:14.786206 kubelet[2442]: I0209 10:04:14.786100 2442 memory_manager.go:346] "RemoveStaleState removing state" podUID="32c7e09b-64e2-47a4-91c0-f1ca76b7ed16" containerName="mount-cgroup"
Feb 9 10:04:14.791972 systemd[1]: Created slice kubepods-burstable-pod7e52ab74_661a_4aea_9f63_fa2114f0ba4b.slice.
Feb 9 10:04:14.866441 kubelet[2442]: I0209 10:04:14.866402 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-cni-path\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.866683 kubelet[2442]: I0209 10:04:14.866671 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-etc-cni-netd\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.866798 kubelet[2442]: I0209 10:04:14.866789 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-host-proc-sys-kernel\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.866903 kubelet[2442]: I0209 10:04:14.866894 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-cilium-cgroup\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867012 kubelet[2442]: I0209 10:04:14.867002 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-bpf-maps\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867112 kubelet[2442]: I0209 10:04:14.867103 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-xtables-lock\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867205 kubelet[2442]: I0209 10:04:14.867196 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-hubble-tls\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867330 kubelet[2442]: I0209 10:04:14.867318 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wlm\" (UniqueName: \"kubernetes.io/projected/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-kube-api-access-h7wlm\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867451 kubelet[2442]: I0209 10:04:14.867440 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-lib-modules\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867569 kubelet[2442]: I0209 10:04:14.867560 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-hostproc\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867673 kubelet[2442]: I0209 10:04:14.867663 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-clustermesh-secrets\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867771 kubelet[2442]: I0209 10:04:14.867763 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-host-proc-sys-net\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867867 kubelet[2442]: I0209 10:04:14.867858 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-cilium-ipsec-secrets\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.867967 kubelet[2442]: I0209 10:04:14.867956 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-cilium-config-path\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:14.868075 kubelet[2442]: I0209 10:04:14.868065 2442 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e52ab74-661a-4aea-9f63-fa2114f0ba4b-cilium-run\") pod \"cilium-dbskf\" (UID: \"7e52ab74-661a-4aea-9f63-fa2114f0ba4b\") " pod="kube-system/cilium-dbskf"
Feb 9 10:04:15.095102 env[1379]: time="2024-02-09T10:04:15.094736988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbskf,Uid:7e52ab74-661a-4aea-9f63-fa2114f0ba4b,Namespace:kube-system,Attempt:0,}"
Feb 9 10:04:15.147585 env[1379]: time="2024-02-09T10:04:15.147498502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:04:15.147726 env[1379]: time="2024-02-09T10:04:15.147594186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:04:15.147726 env[1379]: time="2024-02-09T10:04:15.147631907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:04:15.147906 env[1379]: time="2024-02-09T10:04:15.147859956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915 pid=4327 runtime=io.containerd.runc.v2
Feb 9 10:04:15.158610 systemd[1]: Started cri-containerd-10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915.scope.
Feb 9 10:04:15.185143 env[1379]: time="2024-02-09T10:04:15.185102460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbskf,Uid:7e52ab74-661a-4aea-9f63-fa2114f0ba4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\""
Feb 9 10:04:15.189061 env[1379]: time="2024-02-09T10:04:15.189015934Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:04:15.216375 env[1379]: time="2024-02-09T10:04:15.216337649Z" level=info msg="StopPodSandbox for \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\""
Feb 9 10:04:15.216650 env[1379]: time="2024-02-09T10:04:15.216607099Z" level=info msg="TearDown network for sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" successfully"
Feb 9 10:04:15.216722 env[1379]: time="2024-02-09T10:04:15.216706223Z" level=info msg="StopPodSandbox for \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" returns successfully"
Feb 9 10:04:15.217216 env[1379]: time="2024-02-09T10:04:15.217191402Z" level=info msg="RemovePodSandbox for \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\""
Feb 9 10:04:15.217360 env[1379]: time="2024-02-09T10:04:15.217324647Z" level=info msg="Forcibly stopping sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\""
Feb 9 10:04:15.217475 env[1379]: time="2024-02-09T10:04:15.217457093Z" level=info msg="TearDown network for sandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" successfully"
Feb 9 10:04:15.223415 kubelet[2442]: E0209 10:04:15.222362 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-2jwmd" podUID=c8c1ed1e-3444-49ce-9a5f-d98696cb68e4
Feb 9 10:04:15.225661 kubelet[2442]: I0209 10:04:15.225636 2442 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=32c7e09b-64e2-47a4-91c0-f1ca76b7ed16 path="/var/lib/kubelet/pods/32c7e09b-64e2-47a4-91c0-f1ca76b7ed16/volumes"
Feb 9 10:04:15.243810 env[1379]: time="2024-02-09T10:04:15.243763407Z" level=info msg="RemovePodSandbox \"3f888d54d23e2ed0180abdb6400903104d24dcf780ed433c1cdf64a76cd3a0b0\" returns successfully"
Feb 9 10:04:15.244555 env[1379]: time="2024-02-09T10:04:15.244531637Z" level=info msg="StopPodSandbox for \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\""
Feb 9 10:04:15.244866 env[1379]: time="2024-02-09T10:04:15.244821968Z" level=info msg="TearDown network for sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" successfully"
Feb 9 10:04:15.244944 env[1379]: time="2024-02-09T10:04:15.244928613Z" level=info msg="StopPodSandbox for \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" returns successfully"
Feb 9 10:04:15.245390 env[1379]: time="2024-02-09T10:04:15.245367630Z" level=info msg="RemovePodSandbox for \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\""
Feb 9 10:04:15.245510 env[1379]: time="2024-02-09T10:04:15.245478674Z" level=info msg="Forcibly stopping sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\""
Feb 9 10:04:15.245606 env[1379]: time="2024-02-09T10:04:15.245590519Z" level=info msg="TearDown network for sandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" successfully"
Feb 9 10:04:15.285424 env[1379]: time="2024-02-09T10:04:15.285375523Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2\""
Feb 9 10:04:15.286265 env[1379]: time="2024-02-09T10:04:15.286237997Z" level=info msg="StartContainer for \"00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2\""
Feb 9 10:04:15.288572 env[1379]: time="2024-02-09T10:04:15.288042428Z" level=info msg="RemovePodSandbox \"199aed2f3120c9468fa798514eb5af6ced1d93ff5e8a6fca8763f798360994fa\" returns successfully"
Feb 9 10:04:15.289002 env[1379]: time="2024-02-09T10:04:15.288971464Z" level=info msg="StopPodSandbox for \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\""
Feb 9 10:04:15.289087 env[1379]: time="2024-02-09T10:04:15.289047107Z" level=info msg="TearDown network for sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" successfully"
Feb 9 10:04:15.289132 env[1379]: time="2024-02-09T10:04:15.289083989Z" level=info msg="StopPodSandbox for \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" returns successfully"
Feb 9 10:04:15.289517 env[1379]: time="2024-02-09T10:04:15.289455403Z" level=info msg="RemovePodSandbox for \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\""
Feb 9 10:04:15.289655 env[1379]: time="2024-02-09T10:04:15.289621650Z" level=info msg="Forcibly stopping sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\""
Feb 9 10:04:15.289756 env[1379]: time="2024-02-09T10:04:15.289739534Z" level=info msg="TearDown network for sandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" successfully"
Feb 9 10:04:15.303656 systemd[1]: Started cri-containerd-00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2.scope.
Feb 9 10:04:15.312478 env[1379]: time="2024-02-09T10:04:15.312420626Z" level=info msg="RemovePodSandbox \"bfee3667a47c727a149879f88a6d598c4ed4bf7bcf377a8a1a7eec5404e960ff\" returns successfully"
Feb 9 10:04:15.339093 systemd[1]: cri-containerd-00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2.scope: Deactivated successfully.
Feb 9 10:04:15.341820 env[1379]: time="2024-02-09T10:04:15.341759300Z" level=info msg="StartContainer for \"00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2\" returns successfully"
Feb 9 10:04:15.373491 kubelet[2442]: E0209 10:04:15.373398 2442 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:04:15.395538 env[1379]: time="2024-02-09T10:04:15.395487252Z" level=info msg="shim disconnected" id=00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2
Feb 9 10:04:15.395538 env[1379]: time="2024-02-09T10:04:15.395536574Z" level=warning msg="cleaning up after shim disconnected" id=00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2 namespace=k8s.io
Feb 9 10:04:15.395757 env[1379]: time="2024-02-09T10:04:15.395546934Z" level=info msg="cleaning up dead shim"
Feb 9 10:04:15.402400 env[1379]: time="2024-02-09T10:04:15.402353842Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4414
runtime=io.containerd.runc.v2\n" Feb 9 10:04:15.747339 env[1379]: time="2024-02-09T10:04:15.747216840Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:04:15.807255 env[1379]: time="2024-02-09T10:04:15.807204759Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845\"" Feb 9 10:04:15.812535 env[1379]: time="2024-02-09T10:04:15.809606013Z" level=info msg="StartContainer for \"60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845\"" Feb 9 10:04:15.829243 systemd[1]: Started cri-containerd-60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845.scope. Feb 9 10:04:15.866681 systemd[1]: cri-containerd-60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845.scope: Deactivated successfully. 
Feb 9 10:04:15.867960 env[1379]: time="2024-02-09T10:04:15.867922266Z" level=info msg="StartContainer for \"60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845\" returns successfully" Feb 9 10:04:15.914094 env[1379]: time="2024-02-09T10:04:15.914035999Z" level=info msg="shim disconnected" id=60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845 Feb 9 10:04:15.914412 env[1379]: time="2024-02-09T10:04:15.914394653Z" level=warning msg="cleaning up after shim disconnected" id=60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845 namespace=k8s.io Feb 9 10:04:15.914511 env[1379]: time="2024-02-09T10:04:15.914497297Z" level=info msg="cleaning up dead shim" Feb 9 10:04:15.922413 env[1379]: time="2024-02-09T10:04:15.922375687Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4478 runtime=io.containerd.runc.v2\n" Feb 9 10:04:16.371072 kubelet[2442]: W0209 10:04:16.370867 2442 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32c7e09b_64e2_47a4_91c0_f1ca76b7ed16.slice/cri-containerd-92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3.scope WatchSource:0}: container "92d3529ab916f2dd93506afa84b00930c704e6b45e15739a731f7a04ed5392b3" in namespace "k8s.io": not found Feb 9 10:04:16.749064 env[1379]: time="2024-02-09T10:04:16.748876579Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:04:16.810925 env[1379]: time="2024-02-09T10:04:16.810868229Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8\"" Feb 9 10:04:16.811563 env[1379]: 
time="2024-02-09T10:04:16.811540416Z" level=info msg="StartContainer for \"251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8\"" Feb 9 10:04:16.833547 systemd[1]: Started cri-containerd-251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8.scope. Feb 9 10:04:16.865252 systemd[1]: cri-containerd-251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8.scope: Deactivated successfully. Feb 9 10:04:16.876258 env[1379]: time="2024-02-09T10:04:16.876215052Z" level=info msg="StartContainer for \"251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8\" returns successfully" Feb 9 10:04:16.913180 env[1379]: time="2024-02-09T10:04:16.913133271Z" level=info msg="shim disconnected" id=251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8 Feb 9 10:04:16.913567 env[1379]: time="2024-02-09T10:04:16.913543168Z" level=warning msg="cleaning up after shim disconnected" id=251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8 namespace=k8s.io Feb 9 10:04:16.913716 env[1379]: time="2024-02-09T10:04:16.913681573Z" level=info msg="cleaning up dead shim" Feb 9 10:04:16.921408 env[1379]: time="2024-02-09T10:04:16.921368237Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4538 runtime=io.containerd.runc.v2\n" Feb 9 10:04:16.974698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8-rootfs.mount: Deactivated successfully. 
Feb 9 10:04:17.222853 kubelet[2442]: E0209 10:04:17.222535 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-2jwmd" podUID=c8c1ed1e-3444-49ce-9a5f-d98696cb68e4 Feb 9 10:04:17.753095 env[1379]: time="2024-02-09T10:04:17.753018346Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:04:17.808543 env[1379]: time="2024-02-09T10:04:17.808486750Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4\"" Feb 9 10:04:17.809065 env[1379]: time="2024-02-09T10:04:17.809040452Z" level=info msg="StartContainer for \"07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4\"" Feb 9 10:04:17.828439 systemd[1]: Started cri-containerd-07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4.scope. Feb 9 10:04:17.854908 systemd[1]: cri-containerd-07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4.scope: Deactivated successfully. 
Feb 9 10:04:17.861610 env[1379]: time="2024-02-09T10:04:17.861567500Z" level=info msg="StartContainer for \"07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4\" returns successfully" Feb 9 10:04:17.901630 env[1379]: time="2024-02-09T10:04:17.901581890Z" level=info msg="shim disconnected" id=07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4 Feb 9 10:04:17.901857 env[1379]: time="2024-02-09T10:04:17.901839700Z" level=warning msg="cleaning up after shim disconnected" id=07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4 namespace=k8s.io Feb 9 10:04:17.901917 env[1379]: time="2024-02-09T10:04:17.901905582Z" level=info msg="cleaning up dead shim" Feb 9 10:04:17.910140 env[1379]: time="2024-02-09T10:04:17.910096348Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4593 runtime=io.containerd.runc.v2\n" Feb 9 10:04:17.974742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4-rootfs.mount: Deactivated successfully. 
Feb 9 10:04:18.756903 env[1379]: time="2024-02-09T10:04:18.756858548Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:04:18.818031 env[1379]: time="2024-02-09T10:04:18.817975709Z" level=info msg="CreateContainer within sandbox \"10d60477426e2bcb2552cf7d5182e905a589af9d9e2b1a5da75aaba75d736915\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0\"" Feb 9 10:04:18.818636 env[1379]: time="2024-02-09T10:04:18.818558053Z" level=info msg="StartContainer for \"e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0\"" Feb 9 10:04:18.835386 systemd[1]: Started cri-containerd-e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0.scope. Feb 9 10:04:18.871719 env[1379]: time="2024-02-09T10:04:18.871664974Z" level=info msg="StartContainer for \"e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0\" returns successfully" Feb 9 10:04:19.222404 kubelet[2442]: E0209 10:04:19.222364 2442 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-2jwmd" podUID=c8c1ed1e-3444-49ce-9a5f-d98696cb68e4 Feb 9 10:04:19.276320 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:04:19.487952 kubelet[2442]: W0209 10:04:19.487830 2442 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e52ab74_661a_4aea_9f63_fa2114f0ba4b.slice/cri-containerd-00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2.scope WatchSource:0}: task 00860ead42174e9703d13d7cb9c60ab2b3650603ffcd88a458e3c1414c5352e2 not found: not found Feb 9 
10:04:19.775515 kubelet[2442]: I0209 10:04:19.775417 2442 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dbskf" podStartSLOduration=5.775381665 podCreationTimestamp="2024-02-09 10:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:04:19.774230858 +0000 UTC m=+244.713809672" watchObservedRunningTime="2024-02-09 10:04:19.775381665 +0000 UTC m=+244.714960439" Feb 9 10:04:20.427601 systemd[1]: run-containerd-runc-k8s.io-e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0-runc.Xwctd6.mount: Deactivated successfully. Feb 9 10:04:21.854373 systemd-networkd[1536]: lxc_health: Link UP Feb 9 10:04:21.879522 systemd-networkd[1536]: lxc_health: Gained carrier Feb 9 10:04:21.880313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:04:22.574814 systemd[1]: run-containerd-runc-k8s.io-e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0-runc.Ah175f.mount: Deactivated successfully. Feb 9 10:04:22.596781 kubelet[2442]: W0209 10:04:22.596085 2442 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e52ab74_661a_4aea_9f63_fa2114f0ba4b.slice/cri-containerd-60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845.scope WatchSource:0}: task 60a48b7645c0f6ae14c82d4d9bba2f56a2e614b7097a270f17cec630519d0845 not found: not found Feb 9 10:04:23.047449 systemd-networkd[1536]: lxc_health: Gained IPv6LL Feb 9 10:04:24.754157 systemd[1]: run-containerd-runc-k8s.io-e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0-runc.QNTwCB.mount: Deactivated successfully. 
Feb 9 10:04:25.704808 kubelet[2442]: W0209 10:04:25.704770 2442 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e52ab74_661a_4aea_9f63_fa2114f0ba4b.slice/cri-containerd-251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8.scope WatchSource:0}: task 251afda46bd279027033c662cf8e7dac3fb819ee1c8fda0f1214b95494a606e8 not found: not found Feb 9 10:04:26.926810 systemd[1]: run-containerd-runc-k8s.io-e50ccc5215c06fa406ee5631884321e9aa704dbb58427dc067276070d6c1c8a0-runc.3drpzE.mount: Deactivated successfully. Feb 9 10:04:27.036518 sshd[4273]: pam_unix(sshd:session): session closed for user core Feb 9 10:04:27.039111 systemd[1]: sshd@23-10.200.20.16:22-10.200.12.6:58714.service: Deactivated successfully. Feb 9 10:04:27.039885 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 10:04:27.040479 systemd-logind[1368]: Session 26 logged out. Waiting for processes to exit. Feb 9 10:04:27.041188 systemd-logind[1368]: Removed session 26. Feb 9 10:04:28.810473 kubelet[2442]: W0209 10:04:28.810425 2442 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e52ab74_661a_4aea_9f63_fa2114f0ba4b.slice/cri-containerd-07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4.scope WatchSource:0}: task 07db79acf05d68c1b9da6a53d634a454b1c7a3ef634ec44815f287bd2f7ac7a4 not found: not found Feb 9 10:04:41.904818 systemd[1]: cri-containerd-e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a.scope: Deactivated successfully. Feb 9 10:04:41.905130 systemd[1]: cri-containerd-e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a.scope: Consumed 3.450s CPU time. Feb 9 10:04:41.924250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a-rootfs.mount: Deactivated successfully. 
Feb 9 10:04:41.990912 env[1379]: time="2024-02-09T10:04:41.990859621Z" level=info msg="shim disconnected" id=e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a Feb 9 10:04:41.990912 env[1379]: time="2024-02-09T10:04:41.990907983Z" level=warning msg="cleaning up after shim disconnected" id=e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a namespace=k8s.io Feb 9 10:04:41.990912 env[1379]: time="2024-02-09T10:04:41.990917504Z" level=info msg="cleaning up dead shim" Feb 9 10:04:41.998483 env[1379]: time="2024-02-09T10:04:41.998429993Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5271 runtime=io.containerd.runc.v2\n" Feb 9 10:04:42.798140 kubelet[2442]: I0209 10:04:42.798108 2442 scope.go:115] "RemoveContainer" containerID="e8d06a4002c5d2eeed261997d4514626292c2f9483715c932b3f1f9682ce8a1a" Feb 9 10:04:42.801211 env[1379]: time="2024-02-09T10:04:42.801168116Z" level=info msg="CreateContainer within sandbox \"7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 10:04:42.841132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906062605.mount: Deactivated successfully. Feb 9 10:04:42.845650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149641159.mount: Deactivated successfully. 
Feb 9 10:04:42.870353 env[1379]: time="2024-02-09T10:04:42.870280957Z" level=info msg="CreateContainer within sandbox \"7f6a3f61f411bdb5d8f209f85eadc879860db7ead78b7b1d07a670a2df370aeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8ab712931022a146ad668b19b3a3a1cbfba5fca6ac6c79e293eba4bdf2b95e85\"" Feb 9 10:04:42.870848 env[1379]: time="2024-02-09T10:04:42.870824821Z" level=info msg="StartContainer for \"8ab712931022a146ad668b19b3a3a1cbfba5fca6ac6c79e293eba4bdf2b95e85\"" Feb 9 10:04:42.885506 systemd[1]: Started cri-containerd-8ab712931022a146ad668b19b3a3a1cbfba5fca6ac6c79e293eba4bdf2b95e85.scope. Feb 9 10:04:42.931975 env[1379]: time="2024-02-09T10:04:42.931911509Z" level=info msg="StartContainer for \"8ab712931022a146ad668b19b3a3a1cbfba5fca6ac6c79e293eba4bdf2b95e85\" returns successfully" Feb 9 10:04:45.752810 systemd[1]: cri-containerd-85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc.scope: Deactivated successfully. Feb 9 10:04:45.753101 systemd[1]: cri-containerd-85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc.scope: Consumed 3.149s CPU time. Feb 9 10:04:45.774089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc-rootfs.mount: Deactivated successfully. 
Feb 9 10:04:45.805900 env[1379]: time="2024-02-09T10:04:45.805849513Z" level=info msg="shim disconnected" id=85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc Feb 9 10:04:45.806358 env[1379]: time="2024-02-09T10:04:45.805903835Z" level=warning msg="cleaning up after shim disconnected" id=85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc namespace=k8s.io Feb 9 10:04:45.806358 env[1379]: time="2024-02-09T10:04:45.805914396Z" level=info msg="cleaning up dead shim" Feb 9 10:04:45.814220 env[1379]: time="2024-02-09T10:04:45.814173803Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:04:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5330 runtime=io.containerd.runc.v2\n" Feb 9 10:04:46.216731 kubelet[2442]: E0209 10:04:46.216462 2442 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-ff24132019.17b229b7411ce28b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-ff24132019", UID:"06802d2832255ee476b45342e262347a", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-ff24132019"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 4, 35, 775005323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 4, 35, 775005323, time.Local), Count:1, Type:"Warning", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.16:35296->10.200.20.27:2379: read: connection timed out' (will not retry!) Feb 9 10:04:46.409723 kubelet[2442]: E0209 10:04:46.409472 2442 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.16:35482->10.200.20.27:2379: read: connection timed out" Feb 9 10:04:46.810807 kubelet[2442]: I0209 10:04:46.810435 2442 scope.go:115] "RemoveContainer" containerID="85ee4104684fc26843f81438ec58e6995a1994d88343eb35a272b2af6bf172dc" Feb 9 10:04:46.812414 env[1379]: time="2024-02-09T10:04:46.812373480Z" level=info msg="CreateContainer within sandbox \"60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 10:04:46.865866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694003993.mount: Deactivated successfully. Feb 9 10:04:46.894145 env[1379]: time="2024-02-09T10:04:46.894079119Z" level=info msg="CreateContainer within sandbox \"60be4919fcc0014a49892b1d32673803c4862f0b139e5ff7b7e85f3478f8bf82\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3f415aa9db17d84e1f48a4117badf18f7cf3d453663986b540c4e26d01f40bd4\"" Feb 9 10:04:46.894831 env[1379]: time="2024-02-09T10:04:46.894808792Z" level=info msg="StartContainer for \"3f415aa9db17d84e1f48a4117badf18f7cf3d453663986b540c4e26d01f40bd4\"" Feb 9 10:04:46.915114 systemd[1]: Started cri-containerd-3f415aa9db17d84e1f48a4117badf18f7cf3d453663986b540c4e26d01f40bd4.scope. Feb 9 10:04:46.960717 env[1379]: time="2024-02-09T10:04:46.960654245Z" level=info msg="StartContainer for \"3f415aa9db17d84e1f48a4117badf18f7cf3d453663986b540c4e26d01f40bd4\" returns successfully"