Feb 9 09:54:06.025721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:54:06.025743 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:54:06.025751 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 09:54:06.025758 kernel: printk: bootconsole [pl11] enabled
Feb 9 09:54:06.025763 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:54:06.025768 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 09:54:06.025775 kernel: random: crng init done
Feb 9 09:54:06.025780 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:54:06.025785 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 09:54:06.025791 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025796 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025803 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 09:54:06.025808 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025814 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025820 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025826 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025832 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025839 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025845 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 09:54:06.025851 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:06.025856 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 09:54:06.025862 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:54:06.025868 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:06.025874 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 09:54:06.025879 kernel: Zone ranges:
Feb 9 09:54:06.025885 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 09:54:06.025891 kernel: DMA32 empty
Feb 9 09:54:06.025898 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:06.025903 kernel: Movable zone start for each node
Feb 9 09:54:06.025909 kernel: Early memory node ranges
Feb 9 09:54:06.025915 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 09:54:06.025921 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 09:54:06.025926 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 09:54:06.025932 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 09:54:06.025938 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 09:54:06.025944 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 09:54:06.025949 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 09:54:06.025955 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 09:54:06.025961 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:06.025968 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:06.025976 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 09:54:06.025982 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:54:06.025988 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:54:06.025994 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:54:06.026002 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 09:54:06.026008 kernel: psci: SMC Calling Convention v1.4
Feb 9 09:54:06.026014 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 09:54:06.026020 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 09:54:06.026026 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:54:06.026032 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:54:06.026038 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:54:06.026044 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:54:06.026050 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:54:06.026056 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:54:06.026062 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:54:06.026068 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:54:06.026076 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:54:06.026082 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:54:06.026088 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 09:54:06.026094 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 09:54:06.026100 kernel: Policy zone: Normal
Feb 9 09:54:06.026108 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:06.026114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:54:06.026121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:54:06.026127 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:54:06.026133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:54:06.026145 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 09:54:06.026154 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 09:54:06.026161 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:54:06.026167 kernel: trace event string verifier disabled
Feb 9 09:54:06.026173 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:54:06.026180 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:54:06.026186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:54:06.026193 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:54:06.026202 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:54:06.026209 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:54:06.026216 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:54:06.026224 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:54:06.026230 kernel: GICv3: 960 SPIs implemented
Feb 9 09:54:06.026236 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:54:06.026242 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:54:06.026248 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:54:06.026257 kernel: GICv3: 16 PPIs implemented
Feb 9 09:54:06.026263 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 09:54:06.026270 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 09:54:06.026276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:06.026282 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:54:06.026288 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:54:06.026295 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:54:06.026302 kernel: Console: colour dummy device 80x25
Feb 9 09:54:06.026312 kernel: printk: console [tty1] enabled
Feb 9 09:54:06.026318 kernel: ACPI: Core revision 20210730
Feb 9 09:54:06.026325 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:54:06.026331 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:54:06.026337 kernel: LSM: Security Framework initializing
Feb 9 09:54:06.026343 kernel: SELinux: Initializing.
Feb 9 09:54:06.026350 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:06.026359 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:06.026367 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 09:54:06.026373 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 09:54:06.026379 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:54:06.026386 kernel: Remapping and enabling EFI services.
Feb 9 09:54:06.026392 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:54:06.026398 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:54:06.026404 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 09:54:06.026411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:06.026417 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:54:06.026425 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:54:06.026431 kernel: SMP: Total of 2 processors activated.
Feb 9 09:54:06.026440 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:54:06.026447 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 09:54:06.026454 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:54:06.026460 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:54:06.026466 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:54:06.026472 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:54:06.026479 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:54:06.026486 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:54:06.026495 kernel: alternatives: patching kernel code
Feb 9 09:54:06.026506 kernel: devtmpfs: initialized
Feb 9 09:54:06.026514 kernel: KASLR enabled
Feb 9 09:54:06.026520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:54:06.026527 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:54:06.026534 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:54:06.026540 kernel: SMBIOS 3.1.0 present.
Feb 9 09:54:06.026550 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 09:54:06.026556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:54:06.026564 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:54:06.026571 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:54:06.026578 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:54:06.026584 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:54:06.026591 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Feb 9 09:54:06.026600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:54:06.026615 kernel: cpuidle: using governor menu
Feb 9 09:54:06.026624 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:54:06.026630 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:54:06.026637 kernel: ACPI: bus type PCI registered
Feb 9 09:54:06.026644 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:54:06.026650 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:54:06.026660 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:54:06.026667 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:54:06.026674 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:54:06.026680 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:54:06.026689 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:54:06.026695 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:54:06.026702 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:54:06.026708 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:54:06.026715 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:54:06.026722 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:54:06.026728 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:54:06.026735 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:54:06.026742 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:54:06.026753 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:54:06.026759 kernel: ACPI: Interpreter enabled
Feb 9 09:54:06.026766 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:54:06.026772 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:54:06.026779 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:54:06.026786 kernel: printk: bootconsole [pl11] disabled
Feb 9 09:54:06.026792 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 09:54:06.026799 kernel: iommu: Default domain type: Translated
Feb 9 09:54:06.026808 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:54:06.026816 kernel: vgaarb: loaded
Feb 9 09:54:06.026823 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:54:06.026829 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:54:06.026836 kernel: PTP clock support registered
Feb 9 09:54:06.026843 kernel: Registered efivars operations
Feb 9 09:54:06.026852 kernel: No ACPI PMU IRQ for CPU0
Feb 9 09:54:06.026859 kernel: No ACPI PMU IRQ for CPU1
Feb 9 09:54:06.026865 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:54:06.026872 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:54:06.026880 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:54:06.026886 kernel: pnp: PnP ACPI init
Feb 9 09:54:06.026893 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 09:54:06.026899 kernel: NET: Registered PF_INET protocol family
Feb 9 09:54:06.026909 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:54:06.026916 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:54:06.026923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:54:06.026930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:54:06.026939 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:54:06.026947 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:54:06.026954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:06.026961 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:06.026967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:54:06.026977 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:54:06.026983 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 09:54:06.026990 kernel: kvm [1]: HYP mode not available
Feb 9 09:54:06.026997 kernel: Initialise system trusted keyrings
Feb 9 09:54:06.027003 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:54:06.027011 kernel: Key type asymmetric registered
Feb 9 09:54:06.027018 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:54:06.027027 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:54:06.027034 kernel: io scheduler mq-deadline registered
Feb 9 09:54:06.027040 kernel: io scheduler kyber registered
Feb 9 09:54:06.027047 kernel: io scheduler bfq registered
Feb 9 09:54:06.027054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:54:06.027060 kernel: thunder_xcv, ver 1.0
Feb 9 09:54:06.027069 kernel: thunder_bgx, ver 1.0
Feb 9 09:54:06.027079 kernel: nicpf, ver 1.0
Feb 9 09:54:06.027085 kernel: nicvf, ver 1.0
Feb 9 09:54:06.027208 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:54:06.027277 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:54:05 UTC (1707472445)
Feb 9 09:54:06.027287 kernel: efifb: probing for efifb
Feb 9 09:54:06.027294 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 09:54:06.027301 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 09:54:06.027307 kernel: efifb: scrolling: redraw
Feb 9 09:54:06.027316 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 09:54:06.027327 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:54:06.027334 kernel: fb0: EFI VGA frame buffer device
Feb 9 09:54:06.027340 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 09:54:06.027347 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:54:06.027354 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:54:06.027360 kernel: Segment Routing with IPv6
Feb 9 09:54:06.027367 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:54:06.027373 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:54:06.027384 kernel: Key type dns_resolver registered
Feb 9 09:54:06.027390 kernel: registered taskstats version 1
Feb 9 09:54:06.027397 kernel: Loading compiled-in X.509 certificates
Feb 9 09:54:06.027404 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:54:06.027410 kernel: Key type .fscrypt registered
Feb 9 09:54:06.027416 kernel: Key type fscrypt-provisioning registered
Feb 9 09:54:06.027423 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:54:06.027430 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:54:06.027439 kernel: ima: No architecture policies found
Feb 9 09:54:06.027447 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:54:06.027454 kernel: Run /init as init process
Feb 9 09:54:06.027460 kernel: with arguments:
Feb 9 09:54:06.027467 kernel: /init
Feb 9 09:54:06.027473 kernel: with environment:
Feb 9 09:54:06.027480 kernel: HOME=/
Feb 9 09:54:06.027489 kernel: TERM=linux
Feb 9 09:54:06.027495 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:54:06.027504 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:54:06.027514 systemd[1]: Detected virtualization microsoft.
Feb 9 09:54:06.031674 systemd[1]: Detected architecture arm64.
Feb 9 09:54:06.031699 systemd[1]: Running in initrd.
Feb 9 09:54:06.031707 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:54:06.031715 systemd[1]: Hostname set to .
Feb 9 09:54:06.031722 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:54:06.031730 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:54:06.031741 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:54:06.031748 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:54:06.031755 systemd[1]: Reached target paths.target.
Feb 9 09:54:06.031763 systemd[1]: Reached target slices.target.
Feb 9 09:54:06.031770 systemd[1]: Reached target swap.target.
Feb 9 09:54:06.031777 systemd[1]: Reached target timers.target.
Feb 9 09:54:06.031785 systemd[1]: Listening on iscsid.socket.
Feb 9 09:54:06.031792 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:54:06.031800 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:54:06.031808 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:54:06.031815 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:54:06.031822 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:54:06.031829 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:54:06.031836 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:54:06.031844 systemd[1]: Reached target sockets.target.
Feb 9 09:54:06.031851 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:54:06.031858 systemd[1]: Finished network-cleanup.service.
Feb 9 09:54:06.031867 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:54:06.031874 systemd[1]: Starting systemd-journald.service...
Feb 9 09:54:06.031881 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:54:06.031888 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:54:06.031895 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:54:06.031907 systemd-journald[276]: Journal started
Feb 9 09:54:06.031960 systemd-journald[276]: Runtime Journal (/run/log/journal/7261095ce9d4463cba5895e9d69fc405) is 8.0M, max 78.6M, 70.6M free.
Feb 9 09:54:06.009862 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 09:54:06.055628 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:54:06.070193 systemd[1]: Started systemd-journald.service.
Feb 9 09:54:06.070250 kernel: Bridge firewalling registered
Feb 9 09:54:06.070346 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 09:54:06.123848 kernel: audit: type=1130 audit(1707472446.076:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.123874 kernel: SCSI subsystem initialized
Feb 9 09:54:06.123883 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:54:06.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.102227 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:54:06.157946 kernel: audit: type=1130 audit(1707472446.128:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.157969 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:54:06.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.104450 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 09:54:06.195004 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:54:06.195028 kernel: audit: type=1130 audit(1707472446.162:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.104458 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:54:06.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.104486 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:54:06.306652 kernel: audit: type=1130 audit(1707472446.199:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.306690 kernel: audit: type=1130 audit(1707472446.274:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.106556 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 09:54:06.129372 systemd[1]: Started systemd-resolved.service.
Feb 9 09:54:06.347906 kernel: audit: type=1130 audit(1707472446.317:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.162946 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:54:06.194376 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 09:54:06.229157 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:54:06.274795 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:54:06.317986 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:54:06.359668 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:54:06.433183 kernel: audit: type=1130 audit(1707472446.403:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.367187 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:54:06.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.378302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:54:06.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.388166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:54:06.504740 kernel: audit: type=1130 audit(1707472446.437:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.504763 kernel: audit: type=1130 audit(1707472446.466:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.404276 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:54:06.438575 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:54:06.514942 dracut-cmdline[299]: dracut-dracut-053
Feb 9 09:54:06.514942 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Feb 9 09:54:06.514942 dracut-cmdline[299]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:06.491046 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:54:06.573629 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:54:06.585623 kernel: iscsi: registered transport (tcp)
Feb 9 09:54:06.606758 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:54:06.606817 kernel: QLogic iSCSI HBA Driver
Feb 9 09:54:06.636043 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:54:06.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:06.642346 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:54:06.698628 kernel: raid6: neonx8 gen() 13823 MB/s
Feb 9 09:54:06.719627 kernel: raid6: neonx8 xor() 10834 MB/s
Feb 9 09:54:06.741618 kernel: raid6: neonx4 gen() 13579 MB/s
Feb 9 09:54:06.762617 kernel: raid6: neonx4 xor() 11200 MB/s
Feb 9 09:54:06.782634 kernel: raid6: neonx2 gen() 12940 MB/s
Feb 9 09:54:06.804618 kernel: raid6: neonx2 xor() 10248 MB/s
Feb 9 09:54:06.825616 kernel: raid6: neonx1 gen() 10516 MB/s
Feb 9 09:54:06.846617 kernel: raid6: neonx1 xor() 8803 MB/s
Feb 9 09:54:06.868618 kernel: raid6: int64x8 gen() 6294 MB/s
Feb 9 09:54:06.889616 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 09:54:06.910616 kernel: raid6: int64x4 gen() 7275 MB/s
Feb 9 09:54:06.931617 kernel: raid6: int64x4 xor() 3855 MB/s
Feb 9 09:54:06.952616 kernel: raid6: int64x2 gen() 6156 MB/s
Feb 9 09:54:06.972617 kernel: raid6: int64x2 xor() 3324 MB/s
Feb 9 09:54:06.994618 kernel: raid6: int64x1 gen() 5040 MB/s
Feb 9 09:54:07.019794 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 9 09:54:07.019804 kernel: raid6: using algorithm neonx8 gen() 13823 MB/s
Feb 9 09:54:07.019812 kernel: raid6: .... xor() 10834 MB/s, rmw enabled
Feb 9 09:54:07.024813 kernel: raid6: using neon recovery algorithm
Feb 9 09:54:07.048228 kernel: xor: measuring software checksum speed
Feb 9 09:54:07.048239 kernel: 8regs : 17293 MB/sec
Feb 9 09:54:07.052616 kernel: 32regs : 20755 MB/sec
Feb 9 09:54:07.063093 kernel: arm64_neon : 27854 MB/sec
Feb 9 09:54:07.063103 kernel: xor: using function: arm64_neon (27854 MB/sec)
Feb 9 09:54:07.120629 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:54:07.129974 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:54:07.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.138000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:54:07.138000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:54:07.139837 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:54:07.173797 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Feb 9 09:54:07.179028 systemd[1]: Started systemd-udevd.service.
Feb 9 09:54:07.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.190153 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:54:07.207418 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 09:54:07.234864 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:54:07.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.240403 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:54:07.280200 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:54:07.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.335692 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 09:54:07.343629 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 09:54:07.361639 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 09:54:07.361692 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 09:54:07.370919 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 09:54:07.370970 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 09:54:07.391450 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 09:54:07.391504 kernel: scsi host1: storvsc_host_t
Feb 9 09:54:07.391543 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 09:54:07.391683 kernel: scsi host0: storvsc_host_t
Feb 9 09:54:07.395636 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 09:54:07.408838 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 09:54:07.429504 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 09:54:07.429730 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 09:54:07.442602 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 09:54:07.442879 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 09:54:07.442976 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 09:54:07.447412 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 09:54:07.455005 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 09:54:07.455162 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 09:54:07.467379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:07.467427 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 09:54:07.485631 kernel: hv_netvsc 000d3afc-8196-000d-3afc-8196000d3afc eth0: VF slot 1 added
Feb 9 09:54:07.496632 kernel: hv_vmbus: registering driver hv_pci
Feb 9 09:54:07.505631 kernel: hv_pci a8bd0933-2d00-450b-8c32-8bb538495b91: PCI VMBus probing: Using version 0x10004
Feb 9 09:54:07.536974 kernel: hv_pci a8bd0933-2d00-450b-8c32-8bb538495b91: PCI host bridge to bus 2d00:00
Feb 9 09:54:07.537158 kernel: pci_bus 2d00:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 09:54:07.537259 kernel: pci_bus 2d00:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 09:54:07.550640 kernel: pci 2d00:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 09:54:07.563682 kernel: pci 2d00:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:07.587041 kernel: pci 2d00:00:02.0: enabling Extended Tags
Feb 9 09:54:07.608642 kernel: pci 2d00:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2d00:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 09:54:07.608867 kernel: pci_bus 2d00:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 09:54:07.621408 kernel: pci 2d00:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:07.664638 kernel: mlx5_core 2d00:00:02.0: firmware version: 16.30.1284
Feb 9 09:54:07.822630 kernel: mlx5_core 2d00:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 09:54:07.852705 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:54:07.899654 kernel: hv_netvsc 000d3afc-8196-000d-3afc-8196000d3afc eth0: VF registering: eth1
Feb 9 09:54:07.899864 kernel: mlx5_core 2d00:00:02.0 eth1: joined to eth0
Feb 9 09:54:07.915624 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (544)
Feb 9 09:54:07.923628 kernel: mlx5_core 2d00:00:02.0 enP11520s1: renamed from eth1
Feb 9 09:54:07.931466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:54:08.071133 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:54:08.106086 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:54:08.123727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:54:08.132163 systemd[1]: Starting disk-uuid.service...
Feb 9 09:54:08.161643 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:08.170626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:09.181478 disk-uuid[605]: The operation has completed successfully.
Feb 9 09:54:09.188806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:09.233747 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:54:09.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.233844 systemd[1]: Finished disk-uuid.service.
Feb 9 09:54:09.248908 systemd[1]: Starting verity-setup.service...
Feb 9 09:54:09.295445 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:54:09.546316 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:54:09.552886 systemd[1]: Finished verity-setup.service.
Feb 9 09:54:09.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.564483 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:54:09.629644 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:54:09.629286 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:54:09.634626 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:54:09.635468 systemd[1]: Starting ignition-setup.service...
Feb 9 09:54:09.661942 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:54:09.689688 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:09.689735 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:09.695231 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:09.756096 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:54:09.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.766000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:54:09.767953 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:54:09.782381 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:54:09.800302 systemd-networkd[847]: lo: Link UP
Feb 9 09:54:09.800311 systemd-networkd[847]: lo: Gained carrier
Feb 9 09:54:09.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.801054 systemd-networkd[847]: Enumeration completed
Feb 9 09:54:09.804824 systemd[1]: Started systemd-networkd.service.
Feb 9 09:54:09.810723 systemd[1]: Reached target network.target.
Feb 9 09:54:09.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.820957 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:09.853605 iscsid[855]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:09.853605 iscsid[855]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 09:54:09.853605 iscsid[855]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 09:54:09.853605 iscsid[855]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:54:09.853605 iscsid[855]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:54:09.853605 iscsid[855]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:09.853605 iscsid[855]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:54:09.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.822102 systemd[1]: Starting iscsiuio.service...
Feb 9 09:54:09.830214 systemd[1]: Started iscsiuio.service.
Feb 9 09:54:09.840600 systemd[1]: Starting iscsid.service...
Feb 9 09:54:09.992774 kernel: mlx5_core 2d00:00:02.0 enP11520s1: Link up
Feb 9 09:54:09.858205 systemd[1]: Started iscsid.service.
Feb 9 09:54:09.872683 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:54:10.053651 kernel: kauditd_printk_skb: 16 callbacks suppressed
Feb 9 09:54:10.053679 kernel: audit: type=1130 audit(1707472450.012:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:10.053690 kernel: hv_netvsc 000d3afc-8196-000d-3afc-8196000d3afc eth0: Data path switched to VF: enP11520s1
Feb 9 09:54:10.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.941947 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:54:09.948010 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:54:10.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.961742 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:54:10.108810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:54:10.108840 kernel: audit: type=1130 audit(1707472450.071:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:09.972277 systemd[1]: Reached target remote-fs.target.
Feb 9 09:54:09.988980 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:54:10.006908 systemd[1]: Finished ignition-setup.service.
Feb 9 09:54:10.012695 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:54:10.094714 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:54:10.095637 systemd-networkd[847]: enP11520s1: Link UP
Feb 9 09:54:10.095714 systemd-networkd[847]: eth0: Link UP
Feb 9 09:54:10.095827 systemd-networkd[847]: eth0: Gained carrier
Feb 9 09:54:10.109802 systemd-networkd[847]: enP11520s1: Gained carrier
Feb 9 09:54:10.138698 systemd-networkd[847]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 09:54:11.834752 systemd-networkd[847]: eth0: Gained IPv6LL
Feb 9 09:54:13.470788 ignition[870]: Ignition 2.14.0
Feb 9 09:54:13.470800 ignition[870]: Stage: fetch-offline
Feb 9 09:54:13.470853 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:13.470877 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:13.534556 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:13.534731 ignition[870]: parsed url from cmdline: ""
Feb 9 09:54:13.541373 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:54:13.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.534735 ignition[870]: no config URL provided
Feb 9 09:54:13.549494 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:54:13.591062 kernel: audit: type=1130 audit(1707472453.547:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.534740 ignition[870]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:13.534749 ignition[870]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:13.534755 ignition[870]: failed to fetch config: resource requires networking
Feb 9 09:54:13.535068 ignition[870]: Ignition finished successfully
Feb 9 09:54:13.578737 ignition[876]: Ignition 2.14.0
Feb 9 09:54:13.578743 ignition[876]: Stage: fetch
Feb 9 09:54:13.578839 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:13.578857 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:13.581279 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:13.581382 ignition[876]: parsed url from cmdline: ""
Feb 9 09:54:13.581386 ignition[876]: no config URL provided
Feb 9 09:54:13.581391 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:13.581398 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:13.581425 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 09:54:13.699160 ignition[876]: GET result: OK
Feb 9 09:54:13.699277 ignition[876]: config has been read from IMDS userdata
Feb 9 09:54:13.699335 ignition[876]: parsing config with SHA512: f1dfd05425888d0ddb55c9dafe340a9a5fb7fd5711152fbda93e4bc8449b560936081136c7f3ff834bfc550666e5fb32809b95fca8626c7f3376e874fc4d3db6
Feb 9 09:54:13.733148 unknown[876]: fetched base config from "system"
Feb 9 09:54:13.733165 unknown[876]: fetched base config from "system"
Feb 9 09:54:13.734091 ignition[876]: fetch: fetch complete
Feb 9 09:54:13.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.733170 unknown[876]: fetched user config from "azure"
Feb 9 09:54:13.782059 kernel: audit: type=1130 audit(1707472453.748:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.734114 ignition[876]: fetch: fetch passed
Feb 9 09:54:13.741179 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:54:13.734202 ignition[876]: Ignition finished successfully
Feb 9 09:54:13.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.750019 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:54:13.828032 kernel: audit: type=1130 audit(1707472453.795:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.780088 ignition[882]: Ignition 2.14.0
Feb 9 09:54:13.869158 kernel: audit: type=1130 audit(1707472453.837:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.791307 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:54:13.780095 ignition[882]: Stage: kargs
Feb 9 09:54:13.817997 systemd[1]: Starting ignition-disks.service...
Feb 9 09:54:13.780206 ignition[882]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:13.831966 systemd[1]: Finished ignition-disks.service.
Feb 9 09:54:13.780225 ignition[882]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:13.837507 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:54:13.783168 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:13.874309 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:54:13.786679 ignition[882]: kargs: kargs passed
Feb 9 09:54:13.879766 systemd[1]: Reached target local-fs.target.
Feb 9 09:54:13.786769 ignition[882]: Ignition finished successfully
Feb 9 09:54:13.889132 systemd[1]: Reached target sysinit.target.
Feb 9 09:54:13.824777 ignition[888]: Ignition 2.14.0
Feb 9 09:54:13.898502 systemd[1]: Reached target basic.target.
Feb 9 09:54:13.824784 ignition[888]: Stage: disks
Feb 9 09:54:13.917426 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:54:13.824887 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:13.824914 ignition[888]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:13.990274 systemd-fsck[896]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 9 09:54:13.828313 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:13.993725 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:54:14.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:13.831097 ignition[888]: disks: disks passed
Feb 9 09:54:13.831150 ignition[888]: Ignition finished successfully
Feb 9 09:54:14.040852 systemd[1]: Mounting sysroot.mount...
Feb 9 09:54:14.056631 kernel: audit: type=1130 audit(1707472454.015:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.066634 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:54:14.066913 systemd[1]: Mounted sysroot.mount.
Feb 9 09:54:14.070787 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:54:14.107897 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:54:14.113080 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 09:54:14.121562 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:54:14.121602 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:54:14.128220 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:54:14.170375 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:14.175824 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:54:14.208101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907)
Feb 9 09:54:14.208150 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:14.208160 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:54:14.219134 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:14.224463 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:14.229815 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:14.256117 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:54:14.265487 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:54:14.274304 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:54:14.710900 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:54:14.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.717916 systemd[1]: Starting ignition-mount.service...
Feb 9 09:54:14.762815 kernel: audit: type=1130 audit(1707472454.716:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.751637 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:54:14.759459 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:14.759586 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:14.791872 ignition[973]: INFO : Ignition 2.14.0
Feb 9 09:54:14.791872 ignition[973]: INFO : Stage: mount
Feb 9 09:54:14.791872 ignition[973]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:14.791872 ignition[973]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:14.855587 kernel: audit: type=1130 audit(1707472454.806:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.801493 systemd[1]: Finished ignition-mount.service.
Feb 9 09:54:14.861243 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:14.861243 ignition[973]: INFO : mount: mount passed
Feb 9 09:54:14.861243 ignition[973]: INFO : Ignition finished successfully
Feb 9 09:54:14.910683 kernel: audit: type=1130 audit(1707472454.866:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.857132 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:54:15.567479 coreos-metadata[906]: Feb 09 09:54:15.567 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 09:54:15.578412 coreos-metadata[906]: Feb 09 09:54:15.578 INFO Fetch successful
Feb 9 09:54:15.615013 coreos-metadata[906]: Feb 09 09:54:15.614 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 09:54:15.642520 coreos-metadata[906]: Feb 09 09:54:15.642 INFO Fetch successful
Feb 9 09:54:15.658593 coreos-metadata[906]: Feb 09 09:54:15.658 INFO wrote hostname ci-3510.3.2-a-b353ffea6c to /sysroot/etc/hostname
Feb 9 09:54:15.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.668383 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 09:54:15.701970 kernel: audit: type=1130 audit(1707472455.673:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.688670 systemd[1]: Starting ignition-files.service...
Feb 9 09:54:15.703027 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:15.726636 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985)
Feb 9 09:54:15.738203 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:15.738240 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:15.743410 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:15.749925 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:15.768259 ignition[1004]: INFO : Ignition 2.14.0
Feb 9 09:54:15.768259 ignition[1004]: INFO : Stage: files
Feb 9 09:54:15.778104 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:15.778104 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:15.778104 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:15.778104 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:54:15.815865 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:54:15.815865 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:54:15.941637 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:54:15.950163 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:54:15.964153 unknown[1004]: wrote ssh authorized keys file for user: core
Feb 9 09:54:15.969717 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:54:15.983773 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:54:15.994851 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:16.419893 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 09:54:16.570228 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 09:54:16.585705 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:54:16.585705 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:54:16.585705 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:16.742906 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:54:17.075867 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:54:17.075867 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:54:17.099052 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 09:54:17.444939 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:54:17.716292 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 09:54:17.736640 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:54:17.736640 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:17.736640 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:54:17.899131 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 09:54:18.211623 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432
Feb 9 09:54:18.231788 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:18.231788 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:18.231788 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:54:18.271539 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:54:18.581292 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3
Feb 9 09:54:18.612937 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:18.612937 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:18.612937 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:54:18.655001 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 09:54:19.266964 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 9 09:54:19.286166 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:19.286166 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:19.286166 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:19.286166 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:54:19.286166 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:19.658890 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 09:54:19.708952 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:54:19.723881 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:54:19.847703 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1008)
Feb 9 09:54:19.845870 systemd[1]: mnt-oem2161246240.mount: Deactivated successfully.
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2161246240"
Feb 9 09:54:19.855650 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2161246240": device or resource busy
Feb 9 09:54:19.855650 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2161246240", trying btrfs: device or resource busy
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2161246240"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2161246240"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2161246240"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2161246240"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:54:19.855650 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:54:20.113405 kernel: audit: type=1130 audit(1707472459.885:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.113433 kernel: audit: type=1130 audit(1707472459.976:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.113443 kernel: audit: type=1131 audit(1707472459.976:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem124257251"
Feb 9 09:54:20.113548 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem124257251": device or resource busy
Feb 9 09:54:20.113548 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem124257251", trying btrfs: device or resource busy
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem124257251"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem124257251"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem124257251"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem124257251"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:54:20.113548 ignition[1004]: INFO : files: op(1c): [started] processing unit "prepare-critools.service"
Feb 9 09:54:20.375056 kernel: audit: type=1130 audit(1707472460.242:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.375085 kernel: audit: type=1130 audit(1707472460.346:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.871197 systemd[1]: Finished ignition-files.service.
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1e): [started] processing unit "prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:54:20.382564 ignition[1004]: INFO : files: files passed
Feb 9 09:54:20.382564 ignition[1004]: INFO : Ignition finished successfully
Feb 9 09:54:20.743842 kernel: audit: type=1131 audit(1707472460.377:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.743879 kernel: audit: type=1130 audit(1707472460.499:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.743890 kernel: audit: type=1131 audit(1707472460.626:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.917791 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 09:54:19.923233 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:54:20.776250 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:54:19.923961 systemd[1]: Starting ignition-quench.service...
Feb 9 09:54:19.952222 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:54:19.952335 systemd[1]: Finished ignition-quench.service.
Feb 9 09:54:20.235895 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:54:20.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.242935 systemd[1]: Reached target ignition-complete.target.
Feb 9 09:54:20.287359 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 09:54:20.333483 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 09:54:20.918110 kernel: audit: type=1131 audit(1707472460.833:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.918132 kernel: audit: type=1131 audit(1707472460.884:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.333585 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:54:20.951587 kernel: audit: type=1131 audit(1707472460.918:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.377953 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:54:20.983123 kernel: audit: type=1131 audit(1707472460.951:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.388009 systemd[1]: Reached target initrd.target.
Feb 9 09:54:21.022663 kernel: audit: type=1131 audit(1707472460.983:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.428124 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:54:21.032215 iscsid[855]: iscsid shutting down.
Feb 9 09:54:20.440741 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:54:21.118726 kernel: audit: type=1131 audit(1707472461.050:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.118756 kernel: audit: type=1131 audit(1707472461.084:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.118863 ignition[1042]: INFO : Ignition 2.14.0
Feb 9 09:54:21.118863 ignition[1042]: INFO : Stage: umount
Feb 9 09:54:21.118863 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:21.118863 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:21.118863 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:21.118863 ignition[1042]: INFO : umount: umount passed
Feb 9 09:54:21.118863 ignition[1042]: INFO : Ignition finished successfully
Feb 9 09:54:21.267636 kernel: audit: type=1131 audit(1707472461.124:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.267660 kernel: audit: type=1130 audit(1707472461.157:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.267677 kernel: audit: type=1131 audit(1707472461.157:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.494202 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:54:21.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.540526 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:54:20.567711 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:54:20.580566 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:54:20.597153 systemd[1]: Stopped target timers.target.
Feb 9 09:54:20.611779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:54:20.611846 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:54:20.657054 systemd[1]: Stopped target initrd.target.
Feb 9 09:54:21.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.672318 systemd[1]: Stopped target basic.target.
Feb 9 09:54:20.686992 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:54:20.705879 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:54:21.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.726467 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:54:21.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.737886 systemd[1]: Stopped target remote-fs.target.
Feb 9 09:54:20.749836 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 09:54:20.762814 systemd[1]: Stopped target sysinit.target.
Feb 9 09:54:21.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.781638 systemd[1]: Stopped target local-fs.target.
Feb 9 09:54:20.798014 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 09:54:20.809873 systemd[1]: Stopped target swap.target.
Feb 9 09:54:20.822575 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 09:54:21.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.822645 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 09:54:20.862036 systemd[1]: Stopped target cryptsetup.target.
Feb 9 09:54:21.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.872687 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 09:54:21.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.872738 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 09:54:21.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.884440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 09:54:20.884485 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 09:54:21.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.918571 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 09:54:21.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.918632 systemd[1]: Stopped ignition-files.service.
Feb 9 09:54:21.561000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 09:54:20.952088 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 09:54:21.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.952140 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 09:54:21.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.006324 systemd[1]: Stopping ignition-mount.service...
Feb 9 09:54:21.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.018141 systemd[1]: Stopping iscsid.service...
Feb 9 09:54:21.031580 systemd[1]: Stopping sysroot-boot.service...
Feb 9 09:54:21.044205 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 09:54:21.044288 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 09:54:21.647889 kernel: hv_netvsc 000d3afc-8196-000d-3afc-8196000d3afc eth0: Data path switched from VF: enP11520s1
Feb 9 09:54:21.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.051000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 09:54:21.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.051060 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 09:54:21.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.105242 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 09:54:21.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.105359 systemd[1]: Stopped iscsid.service.
Feb 9 09:54:21.124745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 09:54:21.124842 systemd[1]: Finished initrd-cleanup.service.
Feb 9 09:54:21.187511 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 09:54:21.187970 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 09:54:21.188058 systemd[1]: Stopped ignition-mount.service.
Feb 9 09:54:21.233454 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 09:54:21.233510 systemd[1]: Stopped ignition-disks.service.
Feb 9 09:54:21.246485 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 09:54:21.246531 systemd[1]: Stopped ignition-kargs.service.
Feb 9 09:54:21.252518 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 09:54:21.252556 systemd[1]: Stopped ignition-fetch.service.
Feb 9 09:54:21.261787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 09:54:21.261826 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 09:54:21.273682 systemd[1]: Stopped target paths.target.
Feb 9 09:54:21.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:21.283472 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 09:54:21.293038 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 09:54:21.299338 systemd[1]: Stopped target slices.target.
Feb 9 09:54:21.309747 systemd[1]: Stopped target sockets.target.
Feb 9 09:54:21.319156 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 09:54:21.319201 systemd[1]: Closed iscsid.socket.
Feb 9 09:54:21.330414 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 09:54:21.330456 systemd[1]: Stopped ignition-setup.service.
Feb 9 09:54:21.340984 systemd[1]: Stopping iscsiuio.service...
Feb 9 09:54:21.358857 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 09:54:21.358955 systemd[1]: Stopped iscsiuio.service.
Feb 9 09:54:21.370739 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 09:54:21.370814 systemd[1]: Stopped sysroot-boot.service.
Feb 9 09:54:21.381301 systemd[1]: Stopped target network.target.
Feb 9 09:54:21.391839 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 09:54:21.391886 systemd[1]: Closed iscsiuio.socket.
Feb 9 09:54:21.401070 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 09:54:21.401111 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 09:54:21.413380 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:54:21.832636 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Feb 9 09:54:21.423235 systemd[1]: Stopping systemd-resolved.service...
Feb 9 09:54:21.436424 systemd-networkd[847]: eth0: DHCPv6 lease lost
Feb 9 09:54:21.832000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 09:54:21.441699 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:54:21.441803 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:54:21.447937 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 09:54:21.447970 systemd[1]: Closed systemd-networkd.socket.
Feb 9 09:54:21.461861 systemd[1]: Stopping network-cleanup.service...
Feb 9 09:54:21.473220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 09:54:21.473286 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 09:54:21.481756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:54:21.481807 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:54:21.499545 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 09:54:21.499598 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:54:21.508550 systemd[1]: Stopping systemd-udevd.service...
Feb 9 09:54:21.521133 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 09:54:21.521657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 09:54:21.521764 systemd[1]: Stopped systemd-resolved.service.
Feb 9 09:54:21.536164 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 09:54:21.536279 systemd[1]: Stopped systemd-udevd.service.
Feb 9 09:54:21.549798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 09:54:21.549852 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 09:54:21.562431 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 09:54:21.562465 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 09:54:21.568190 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 09:54:21.568237 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 09:54:21.577810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 09:54:21.577853 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 09:54:21.589251 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 09:54:21.589288 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 09:54:21.601378 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 09:54:21.629346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 09:54:21.629445 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 09:54:21.647747 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 09:54:21.647805 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 09:54:21.653074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 09:54:21.653120 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 09:54:21.665423 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 09:54:21.665921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 09:54:21.666020 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 09:54:21.752016 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 09:54:21.752119 systemd[1]: Stopped network-cleanup.service.
Feb 9 09:54:21.762502 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 09:54:21.772200 systemd[1]: Starting initrd-switch-root.service...
Feb 9 09:54:21.792752 systemd[1]: Switching root.
Feb 9 09:54:21.833781 systemd-journald[276]: Journal stopped
Feb 9 09:54:34.036976 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 09:54:34.036995 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:54:34.037006 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:54:34.037015 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:54:34.037023 kernel: SELinux: policy capability open_perms=1 Feb 9 09:54:34.037031 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:54:34.037040 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:54:34.037048 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:54:34.037056 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:54:34.037064 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:54:34.037073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:54:34.037082 systemd[1]: Successfully loaded SELinux policy in 282.809ms. Feb 9 09:54:34.037093 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.611ms. Feb 9 09:54:34.037103 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:34.037114 systemd[1]: Detected virtualization microsoft. Feb 9 09:54:34.037124 systemd[1]: Detected architecture arm64. Feb 9 09:54:34.037132 systemd[1]: Detected first boot. Feb 9 09:54:34.037141 systemd[1]: Hostname set to . Feb 9 09:54:34.037150 systemd[1]: Initializing machine ID from random generator. Feb 9 09:54:34.037159 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 09:54:34.037167 kernel: kauditd_printk_skb: 33 callbacks suppressed Feb 9 09:54:34.037177 kernel: audit: type=1400 audit(1707472466.387:89): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:34.037188 kernel: audit: type=1300 audit(1707472466.387:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022802 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:34.037198 kernel: audit: type=1327 audit(1707472466.387:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:34.037207 kernel: audit: type=1400 audit(1707472466.398:90): avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:34.037216 kernel: audit: type=1300 audit(1707472466.398:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:34.037225 kernel: audit: type=1307 audit(1707472466.398:90): cwd="/" Feb 9 09:54:34.037236 kernel: audit: type=1302 audit(1707472466.398:90): item=0 
name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:34.037245 kernel: audit: type=1302 audit(1707472466.398:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:34.037254 kernel: audit: type=1327 audit(1707472466.398:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:34.037263 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:54:34.037272 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:54:34.037282 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:54:34.037292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:54:34.037302 kernel: audit: type=1334 audit(1707472473.269:91): prog-id=12 op=LOAD Feb 9 09:54:34.037311 kernel: audit: type=1334 audit(1707472473.269:92): prog-id=3 op=UNLOAD Feb 9 09:54:34.037319 kernel: audit: type=1334 audit(1707472473.269:93): prog-id=13 op=LOAD Feb 9 09:54:34.037328 kernel: audit: type=1334 audit(1707472473.269:94): prog-id=14 op=LOAD Feb 9 09:54:34.037337 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Feb 9 09:54:34.037345 kernel: audit: type=1334 audit(1707472473.269:95): prog-id=4 op=UNLOAD Feb 9 09:54:34.037356 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:54:34.037367 kernel: audit: type=1334 audit(1707472473.269:96): prog-id=5 op=UNLOAD Feb 9 09:54:34.037375 kernel: audit: type=1334 audit(1707472473.276:97): prog-id=15 op=LOAD Feb 9 09:54:34.037385 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:54:34.037394 kernel: audit: type=1334 audit(1707472473.276:98): prog-id=12 op=UNLOAD Feb 9 09:54:34.037403 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:54:34.037412 kernel: audit: type=1334 audit(1707472473.282:99): prog-id=16 op=LOAD Feb 9 09:54:34.037421 kernel: audit: type=1334 audit(1707472473.288:100): prog-id=17 op=LOAD Feb 9 09:54:34.037430 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:54:34.037440 systemd[1]: Created slice system-getty.slice. Feb 9 09:54:34.037450 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:54:34.037459 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:54:34.037468 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:54:34.037478 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:54:34.037487 systemd[1]: Created slice user.slice. Feb 9 09:54:34.037497 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:54:34.037506 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:54:34.037515 systemd[1]: Set up automount boot.automount. Feb 9 09:54:34.037525 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:54:34.037535 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:54:34.037544 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:54:34.037554 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:54:34.037563 systemd[1]: Reached target integritysetup.target. Feb 9 09:54:34.037572 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 09:54:34.037581 systemd[1]: Reached target remote-fs.target. Feb 9 09:54:34.037590 systemd[1]: Reached target slices.target. Feb 9 09:54:34.037601 systemd[1]: Reached target swap.target. Feb 9 09:54:34.037618 systemd[1]: Reached target torcx.target. Feb 9 09:54:34.037628 systemd[1]: Reached target veritysetup.target. Feb 9 09:54:34.037637 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:54:34.037646 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:54:34.037655 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:54:34.037666 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:54:34.037676 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:54:34.037685 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:54:34.037695 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:54:34.037704 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:54:34.037715 systemd[1]: Mounting media.mount... Feb 9 09:54:34.037724 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:54:34.037734 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:54:34.037744 systemd[1]: Mounting tmp.mount... Feb 9 09:54:34.037753 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:54:34.037763 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:54:34.037772 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:54:34.037782 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:54:34.037791 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:54:34.037800 systemd[1]: Starting modprobe@drm.service... Feb 9 09:54:34.037809 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:54:34.037819 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:54:34.037829 systemd[1]: Starting modprobe@loop.service... Feb 9 09:54:34.037839 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 09:54:34.037849 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:54:34.037858 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:54:34.037867 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:54:34.037876 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:54:34.037886 systemd[1]: Stopped systemd-journald.service. Feb 9 09:54:34.037895 systemd[1]: systemd-journald.service: Consumed 4.075s CPU time. Feb 9 09:54:34.037906 kernel: fuse: init (API version 7.34) Feb 9 09:54:34.037915 systemd[1]: Starting systemd-journald.service... Feb 9 09:54:34.037924 kernel: loop: module loaded Feb 9 09:54:34.037933 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:54:34.037943 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:54:34.037952 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:54:34.037961 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:54:34.037971 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:54:34.037980 systemd[1]: Stopped verity-setup.service. Feb 9 09:54:34.037990 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:54:34.038000 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:54:34.038009 systemd[1]: Mounted media.mount. Feb 9 09:54:34.038018 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:54:34.038027 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:54:34.038040 systemd-journald[1182]: Journal started Feb 9 09:54:34.038079 systemd-journald[1182]: Runtime Journal (/run/log/journal/4388c0ba79cf476ab80ac5a56ae1c030) is 8.0M, max 78.6M, 70.6M free. 
Feb 9 09:54:24.108000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:54:24.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:24.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:24.905000 audit: BPF prog-id=10 op=LOAD Feb 9 09:54:24.905000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:54:24.905000 audit: BPF prog-id=11 op=LOAD Feb 9 09:54:24.905000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:54:26.387000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:26.387000 audit[1075]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022802 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:26.387000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:26.398000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:26.398000 audit[1075]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:26.398000 audit: CWD cwd="/" Feb 9 09:54:26.398000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:26.398000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:26.398000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:33.269000 audit: BPF prog-id=12 op=LOAD Feb 9 09:54:33.269000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:54:33.269000 audit: BPF prog-id=13 op=LOAD Feb 9 09:54:33.269000 audit: BPF prog-id=14 op=LOAD Feb 9 09:54:33.269000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:54:33.269000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:54:33.276000 audit: BPF prog-id=15 op=LOAD Feb 9 09:54:33.276000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:54:33.282000 audit: BPF prog-id=16 op=LOAD Feb 9 09:54:33.288000 audit: BPF prog-id=17 op=LOAD Feb 9 09:54:33.288000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:54:33.288000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:54:33.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:33.324000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:54:33.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:33.913000 audit: BPF prog-id=18 op=LOAD Feb 9 09:54:33.913000 audit: BPF prog-id=19 op=LOAD Feb 9 09:54:33.913000 audit: BPF prog-id=20 op=LOAD Feb 9 09:54:33.914000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:54:33.914000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:54:33.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:34.034000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:54:34.034000 audit[1182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffffb88ab10 a2=4000 a3=1 items=0 ppid=1 pid=1182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:34.034000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:54:33.268505 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:54:26.321384 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:54:33.289741 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:54:26.370683 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:33.290084 systemd[1]: systemd-journald.service: Consumed 4.075s CPU time. 
Feb 9 09:54:26.370705 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:26.370744 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:54:26.370754 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:54:26.370794 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:54:26.370806 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:54:26.371004 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:54:26.371035 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:26.371046 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:26.371446 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:54:26.371477 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:54:26.371495 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:54:26.371509 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:54:26.371526 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:54:26.371539 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:54:31.901216 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:31.901468 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:31.901557 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 
09:54:31.901727 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:31.901774 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:54:31.901827 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:54:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:54:34.048715 systemd[1]: Started systemd-journald.service. Feb 9 09:54:34.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.049507 systemd[1]: Mounted tmp.mount. Feb 9 09:54:34.053539 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:54:34.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.059225 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:54:34.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:34.064972 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:54:34.065088 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:54:34.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.070593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:54:34.070720 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:54:34.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.075964 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:54:34.076079 systemd[1]: Finished modprobe@drm.service. Feb 9 09:54:34.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.081182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 9 09:54:34.081311 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:54:34.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.086829 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:54:34.086945 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:54:34.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.093374 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:54:34.093495 systemd[1]: Finished modprobe@loop.service. Feb 9 09:54:34.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.100466 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 09:54:34.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.108440 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:54:34.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.114669 systemd[1]: Reached target network-pre.target. Feb 9 09:54:34.120869 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:54:34.126596 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:54:34.131158 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:54:34.165277 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:54:34.171685 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:54:34.176642 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:54:34.177759 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:54:34.182632 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:54:34.183740 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:54:34.190137 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:54:34.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.195792 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 09:54:34.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.202444 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:54:34.208618 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:54:34.214148 systemd-journald[1182]: Time spent on flushing to /var/log/journal/4388c0ba79cf476ab80ac5a56ae1c030 is 13.982ms for 1145 entries. Feb 9 09:54:34.214148 systemd-journald[1182]: System Journal (/var/log/journal/4388c0ba79cf476ab80ac5a56ae1c030) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:54:34.297928 systemd-journald[1182]: Received client request to flush runtime journal. Feb 9 09:54:34.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.223043 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:54:34.231210 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:54:34.251773 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:54:34.299283 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:54:34.257250 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:54:34.298938 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:54:34.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.364201 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:54:34.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.786269 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:54:34.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:34.791939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:54:35.178037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:54:35.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.325903 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:54:35.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.331000 audit: BPF prog-id=21 op=LOAD Feb 9 09:54:35.331000 audit: BPF prog-id=22 op=LOAD Feb 9 09:54:35.331000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:54:35.331000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:54:35.332511 systemd[1]: Starting systemd-udevd.service... Feb 9 09:54:35.350662 systemd-udevd[1201]: Using default interface naming scheme 'v252'. Feb 9 09:54:35.600095 systemd[1]: Started systemd-udevd.service. Feb 9 09:54:35.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:35.610000 audit: BPF prog-id=23 op=LOAD Feb 9 09:54:35.612111 systemd[1]: Starting systemd-networkd.service... Feb 9 09:54:35.642554 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:54:35.687902 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:54:35.686000 audit: BPF prog-id=24 op=LOAD Feb 9 09:54:35.686000 audit: BPF prog-id=25 op=LOAD Feb 9 09:54:35.686000 audit: BPF prog-id=26 op=LOAD Feb 9 09:54:35.700630 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:54:35.723000 audit[1205]: AVC avc: denied { confidentiality } for pid=1205 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:54:35.732636 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:54:35.732746 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:54:35.732779 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:54:35.739428 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:54:35.739500 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:54:35.757959 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:54:35.758057 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:54:35.758083 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:54:35.763765 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:54:35.775190 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:54:35.775304 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:54:35.775370 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:54:35.481249 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:35.552995 systemd-journald[1182]: Time jumped backwards, rotating. 
Feb 9 09:54:35.723000 audit[1205]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf1d93970 a1=aa2c a2=ffff9b9124b0 a3=aaaaf1cf4010 items=12 ppid=1201 pid=1205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.723000 audit: CWD cwd="/" Feb 9 09:54:35.723000 audit: PATH item=0 name=(null) inode=5981 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=1 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=2 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=3 name=(null) inode=11455 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=4 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=5 name=(null) inode=11456 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=6 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=7 name=(null) inode=11457 dev=00:0a mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=8 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=9 name=(null) inode=11458 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=10 name=(null) inode=11454 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PATH item=11 name=(null) inode=11459 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.723000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:54:35.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.494029 systemd[1]: Started systemd-userdbd.service. Feb 9 09:54:35.785208 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1208) Feb 9 09:54:35.802759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:54:35.810742 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:54:35.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.817064 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 09:54:35.859079 systemd-networkd[1222]: lo: Link UP Feb 9 09:54:35.859400 systemd-networkd[1222]: lo: Gained carrier Feb 9 09:54:35.859862 systemd-networkd[1222]: Enumeration completed Feb 9 09:54:35.860057 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:35.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.866557 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:35.888796 systemd-networkd[1222]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:35.938305 kernel: mlx5_core 2d00:00:02.0 enP11520s1: Link up Feb 9 09:54:35.965205 kernel: hv_netvsc 000d3afc-8196-000d-3afc-8196000d3afc eth0: Data path switched to VF: enP11520s1 Feb 9 09:54:35.966328 systemd-networkd[1222]: enP11520s1: Link UP Feb 9 09:54:35.966630 systemd-networkd[1222]: eth0: Link UP Feb 9 09:54:35.966639 systemd-networkd[1222]: eth0: Gained carrier Feb 9 09:54:35.970653 systemd-networkd[1222]: enP11520s1: Gained carrier Feb 9 09:54:35.978278 systemd-networkd[1222]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:36.125706 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:36.179213 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:54:36.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.184243 systemd[1]: Reached target cryptsetup.target. Feb 9 09:54:36.189797 systemd[1]: Starting lvm2-activation.service... Feb 9 09:54:36.193831 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 9 09:54:36.215216 systemd[1]: Finished lvm2-activation.service. Feb 9 09:54:36.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.220385 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:54:36.225363 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:54:36.225391 systemd[1]: Reached target local-fs.target. Feb 9 09:54:36.229599 systemd[1]: Reached target machines.target. Feb 9 09:54:36.235884 systemd[1]: Starting ldconfig.service... Feb 9 09:54:36.240216 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:54:36.240280 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:36.241426 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:54:36.246730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:54:36.254742 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:54:36.259490 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:36.259549 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:36.260780 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:54:36.288064 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:54:36.298077 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1283 (bootctl) Feb 9 09:54:36.299385 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 09:54:36.312780 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:54:36.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.341906 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:54:36.404798 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:54:36.933286 systemd-fsck[1291]: fsck.fat 4.2 (2021-01-31) Feb 9 09:54:36.933286 systemd-fsck[1291]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:54:36.936615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:54:36.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.945904 systemd[1]: Mounting boot.mount... Feb 9 09:54:37.010244 systemd[1]: Mounted boot.mount. Feb 9 09:54:37.022647 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:54:37.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.222159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:54:37.222740 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:54:37.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:37.706336 systemd-networkd[1222]: eth0: Gained IPv6LL Feb 9 09:54:37.711069 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:37.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.748003 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:54:37.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.755342 systemd[1]: Starting audit-rules.service... Feb 9 09:54:37.761018 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:54:37.767562 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:54:37.776000 audit: BPF prog-id=27 op=LOAD Feb 9 09:54:37.778945 systemd[1]: Starting systemd-resolved.service... Feb 9 09:54:37.783000 audit: BPF prog-id=28 op=LOAD Feb 9 09:54:37.785863 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:54:37.794524 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:54:37.850473 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:54:37.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.856971 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:54:37.876787 systemd[1]: Started systemd-timesyncd.service. 
Feb 9 09:54:37.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.882346 systemd[1]: Reached target time-set.target. Feb 9 09:54:37.900000 audit[1302]: SYSTEM_BOOT pid=1302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.903847 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:54:37.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.989897 systemd-resolved[1300]: Positive Trust Anchors: Feb 9 09:54:37.989912 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:54:37.989939 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:54:38.070418 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:54:38.076982 systemd-resolved[1300]: Using system hostname 'ci-3510.3.2-a-b353ffea6c'. Feb 9 09:54:38.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:38.079346 systemd[1]: Started systemd-resolved.service. Feb 9 09:54:38.085527 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 09:54:38.085599 kernel: audit: type=1130 audit(1707472478.077:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.115110 kernel: audit: type=1130 audit(1707472478.113:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.115087 systemd[1]: Reached target network.target. Feb 9 09:54:38.142974 systemd[1]: Reached target network-online.target. Feb 9 09:54:38.149320 systemd[1]: Reached target nss-lookup.target. Feb 9 09:54:38.155335 systemd-timesyncd[1301]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Feb 9 09:54:38.155404 systemd-timesyncd[1301]: Initial clock synchronization to Fri 2024-02-09 09:54:38.167759 UTC. 
Feb 9 09:54:38.186000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:38.186000 audit[1318]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc473cc10 a2=420 a3=0 items=0 ppid=1297 pid=1318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:38.229070 kernel: audit: type=1305 audit(1707472478.186:172): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:38.229182 kernel: audit: type=1300 audit(1707472478.186:172): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc473cc10 a2=420 a3=0 items=0 ppid=1297 pid=1318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:38.229225 augenrules[1318]: No rules Feb 9 09:54:38.186000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:38.229996 systemd[1]: Finished audit-rules.service. Feb 9 09:54:38.241849 kernel: audit: type=1327 audit(1707472478.186:172): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:45.156814 ldconfig[1282]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:54:45.172214 systemd[1]: Finished ldconfig.service. Feb 9 09:54:45.178524 systemd[1]: Starting systemd-update-done.service... Feb 9 09:54:45.244767 systemd[1]: Finished systemd-update-done.service. Feb 9 09:54:45.250545 systemd[1]: Reached target sysinit.target. Feb 9 09:54:45.255484 systemd[1]: Started motdgen.path. Feb 9 09:54:45.260864 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Feb 9 09:54:45.268126 systemd[1]: Started logrotate.timer. Feb 9 09:54:45.273028 systemd[1]: Started mdadm.timer. Feb 9 09:54:45.277099 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:54:45.281806 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:54:45.281842 systemd[1]: Reached target paths.target. Feb 9 09:54:45.285913 systemd[1]: Reached target timers.target. Feb 9 09:54:45.291242 systemd[1]: Listening on dbus.socket. Feb 9 09:54:45.296509 systemd[1]: Starting docker.socket... Feb 9 09:54:45.302827 systemd[1]: Listening on sshd.socket. Feb 9 09:54:45.307319 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:45.307796 systemd[1]: Listening on docker.socket. Feb 9 09:54:45.312347 systemd[1]: Reached target sockets.target. Feb 9 09:54:45.317005 systemd[1]: Reached target basic.target. Feb 9 09:54:45.321729 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:45.321754 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:45.322840 systemd[1]: Starting containerd.service... Feb 9 09:54:45.327831 systemd[1]: Starting dbus.service... Feb 9 09:54:45.332006 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:54:45.337339 systemd[1]: Starting extend-filesystems.service... Feb 9 09:54:45.341491 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:54:45.344965 systemd[1]: Starting motdgen.service... Feb 9 09:54:45.349376 systemd[1]: Started nvidia.service. Feb 9 09:54:45.354554 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 9 09:54:45.360000 systemd[1]: Starting prepare-critools.service... Feb 9 09:54:45.365458 systemd[1]: Starting prepare-helm.service... Feb 9 09:54:45.370598 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:54:45.376430 systemd[1]: Starting sshd-keygen.service... Feb 9 09:54:45.383350 systemd[1]: Starting systemd-logind.service... Feb 9 09:54:45.387322 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:45.387385 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:54:45.387782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:54:45.388468 systemd[1]: Starting update-engine.service... Feb 9 09:54:45.393527 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:54:45.407078 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:54:45.407303 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:54:45.433285 extend-filesystems[1329]: Found sda Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda1 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda2 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda3 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found usr Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda4 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda6 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda7 Feb 9 09:54:45.437850 extend-filesystems[1329]: Found sda9 Feb 9 09:54:45.437850 extend-filesystems[1329]: Checking size of /dev/sda9 Feb 9 09:54:45.473636 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:54:45.473826 systemd[1]: Finished motdgen.service. 
Feb 9 09:54:45.495835 systemd-logind[1343]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 09:54:45.500471 systemd-logind[1343]: New seat seat0. Feb 9 09:54:45.532297 jq[1347]: true Feb 9 09:54:45.532554 jq[1328]: false Feb 9 09:54:45.543498 env[1354]: time="2024-02-09T09:54:45.543442582Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:54:45.548244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:54:45.548412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:54:45.566102 env[1354]: time="2024-02-09T09:54:45.566042826Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:54:45.566405 env[1354]: time="2024-02-09T09:54:45.566267466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:45.567442 env[1354]: time="2024-02-09T09:54:45.567404870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:45.567493 env[1354]: time="2024-02-09T09:54:45.567443290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:45.567701 env[1354]: time="2024-02-09T09:54:45.567672812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:45.567701 env[1354]: time="2024-02-09T09:54:45.567698346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:54:45.567754 env[1354]: time="2024-02-09T09:54:45.567713514Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:54:45.567754 env[1354]: time="2024-02-09T09:54:45.567723839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:45.568199 env[1354]: time="2024-02-09T09:54:45.568164673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:45.568459 env[1354]: time="2024-02-09T09:54:45.568431735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:45.568592 env[1354]: time="2024-02-09T09:54:45.568566647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:45.568592 env[1354]: time="2024-02-09T09:54:45.568589419Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:54:45.568666 env[1354]: time="2024-02-09T09:54:45.568644728Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:54:45.568666 env[1354]: time="2024-02-09T09:54:45.568662898Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580108378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580149039Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1
Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580162166Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580262780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580289154Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580303201Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580450 env[1354]: time="2024-02-09T09:54:45.580320050Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580646 jq[1376]: true
Feb 9 09:54:45.580817 env[1354]: time="2024-02-09T09:54:45.580737432Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580817 env[1354]: time="2024-02-09T09:54:45.580761805Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580817 env[1354]: time="2024-02-09T09:54:45.580774812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580817 env[1354]: time="2024-02-09T09:54:45.580789619Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.580817 env[1354]: time="2024-02-09T09:54:45.580802706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 09:54:45.580949 env[1354]: time="2024-02-09T09:54:45.580914046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 09:54:45.582201 env[1354]: time="2024-02-09T09:54:45.582124448Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 09:54:45.585869 env[1354]: time="2024-02-09T09:54:45.585713755Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 09:54:45.585869 env[1354]: time="2024-02-09T09:54:45.585777589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.585869 env[1354]: time="2024-02-09T09:54:45.585792237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 09:54:45.586601 env[1354]: time="2024-02-09T09:54:45.585860073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586639 env[1354]: time="2024-02-09T09:54:45.586609991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586639 env[1354]: time="2024-02-09T09:54:45.586627280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586686 env[1354]: time="2024-02-09T09:54:45.586659297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586686 env[1354]: time="2024-02-09T09:54:45.586673985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586724 env[1354]: time="2024-02-09T09:54:45.586686392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586724 env[1354]: time="2024-02-09T09:54:45.586702760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586724 env[1354]: time="2024-02-09T09:54:45.586714807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.586784 env[1354]: time="2024-02-09T09:54:45.586739900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 09:54:45.587139 env[1354]: time="2024-02-09T09:54:45.586919235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.587139 env[1354]: time="2024-02-09T09:54:45.586946090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.587139 env[1354]: time="2024-02-09T09:54:45.586971143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.587139 env[1354]: time="2024-02-09T09:54:45.586984830Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 09:54:45.587139 env[1354]: time="2024-02-09T09:54:45.586999958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 09:54:45.587986 env[1354]: time="2024-02-09T09:54:45.587535042Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 09:54:45.587986 env[1354]: time="2024-02-09T09:54:45.587575424Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 09:54:45.587986 env[1354]: time="2024-02-09T09:54:45.587647782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 09:54:45.588093 env[1354]: time="2024-02-09T09:54:45.587896875Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 09:54:45.588093 env[1354]: time="2024-02-09T09:54:45.587973595Z" level=info msg="Connect containerd service"
Feb 9 09:54:45.588093 env[1354]: time="2024-02-09T09:54:45.588031826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.588923060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589281210Z" level=info msg="Start subscribing containerd event"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589344323Z" level=info msg="Start recovering state"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589417963Z" level=info msg="Start event monitor"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589437893Z" level=info msg="Start snapshots syncer"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589448419Z" level=info msg="Start cni network conf syncer for default"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589456543Z" level=info msg="Start streaming server"
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589304902Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589592175Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 09:54:45.606158 env[1354]: time="2024-02-09T09:54:45.589644603Z" level=info msg="containerd successfully booted in 0.046941s"
Feb 9 09:54:45.606406 extend-filesystems[1329]: Old size kept for /dev/sda9
Feb 9 09:54:45.606406 extend-filesystems[1329]: Found sr0
Feb 9 09:54:45.589761 systemd[1]: Started containerd.service.
Feb 9 09:54:45.690137 tar[1351]: linux-arm64/helm
Feb 9 09:54:45.690379 tar[1350]: crictl
Feb 9 09:54:45.690510 tar[1349]: ./
Feb 9 09:54:45.690510 tar[1349]: ./loopback
Feb 9 09:54:45.600301 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 09:54:45.612514 systemd[1]: Finished extend-filesystems.service.
Feb 9 09:54:45.714923 bash[1399]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 09:54:45.715664 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 09:54:45.745778 tar[1349]: ./bandwidth
Feb 9 09:54:45.783621 dbus-daemon[1327]: [system] SELinux support is enabled
Feb 9 09:54:45.783787 systemd[1]: Started dbus.service.
Feb 9 09:54:45.789329 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 09:54:45.789358 systemd[1]: Reached target system-config.target.
Feb 9 09:54:45.797166 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 09:54:45.797217 systemd[1]: Reached target user-config.target.
Feb 9 09:54:45.805533 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 09:54:45.806637 dbus-daemon[1327]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 09:54:45.806878 systemd[1]: Started systemd-logind.service.
Feb 9 09:54:45.862741 tar[1349]: ./ptp
Feb 9 09:54:45.956968 tar[1349]: ./vlan
Feb 9 09:54:46.041489 tar[1349]: ./host-device
Feb 9 09:54:46.133139 tar[1349]: ./tuning
Feb 9 09:54:46.173084 tar[1351]: linux-arm64/LICENSE
Feb 9 09:54:46.173232 tar[1351]: linux-arm64/README.md
Feb 9 09:54:46.179424 systemd[1]: Finished prepare-helm.service.
Feb 9 09:54:46.208160 tar[1349]: ./vrf
Feb 9 09:54:46.232598 update_engine[1345]: I0209 09:54:46.214682 1345 main.cc:92] Flatcar Update Engine starting
Feb 9 09:54:46.263925 tar[1349]: ./sbr
Feb 9 09:54:46.281229 systemd[1]: Started update-engine.service.
Feb 9 09:54:46.281471 update_engine[1345]: I0209 09:54:46.281260 1345 update_check_scheduler.cc:74] Next update check in 11m5s
Feb 9 09:54:46.289366 systemd[1]: Started locksmithd.service.
Feb 9 09:54:46.317548 tar[1349]: ./tap
Feb 9 09:54:46.382760 tar[1349]: ./dhcp
Feb 9 09:54:46.537097 systemd[1]: Finished prepare-critools.service.
Feb 9 09:54:46.547986 tar[1349]: ./static
Feb 9 09:54:46.569167 tar[1349]: ./firewall
Feb 9 09:54:46.601304 tar[1349]: ./macvlan
Feb 9 09:54:46.630229 tar[1349]: ./dummy
Feb 9 09:54:46.659514 tar[1349]: ./bridge
Feb 9 09:54:46.691932 tar[1349]: ./ipvlan
Feb 9 09:54:46.721055 tar[1349]: ./portmap
Feb 9 09:54:46.748526 tar[1349]: ./host-local
Feb 9 09:54:46.857179 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 09:54:48.156790 locksmithd[1431]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 09:54:50.108866 sshd_keygen[1346]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 09:54:50.125690 systemd[1]: Finished sshd-keygen.service.
Feb 9 09:54:50.135894 systemd[1]: Starting issuegen.service...
Feb 9 09:54:50.142062 systemd[1]: Started waagent.service.
Feb 9 09:54:50.147674 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 09:54:50.147842 systemd[1]: Finished issuegen.service.
Feb 9 09:54:50.155833 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 09:54:50.205385 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 09:54:50.213621 systemd[1]: Started getty@tty1.service.
Feb 9 09:54:50.219973 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 9 09:54:50.225689 systemd[1]: Reached target getty.target.
Feb 9 09:54:50.230465 systemd[1]: Reached target multi-user.target.
Feb 9 09:54:50.237289 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 09:54:50.251243 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 09:54:50.251418 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 09:54:50.257717 systemd[1]: Startup finished in 749ms (kernel) + 17.983s (initrd) + 26.937s (userspace) = 45.670s.
Feb 9 09:54:51.538841 login[1453]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 9 09:54:51.539336 login[1452]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 09:54:51.621615 systemd[1]: Created slice user-500.slice.
Feb 9 09:54:51.622679 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 09:54:51.624736 systemd-logind[1343]: New session 1 of user core.
Feb 9 09:54:51.679258 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 09:54:51.680612 systemd[1]: Starting user@500.service...
Feb 9 09:54:51.720158 (systemd)[1456]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:54:51.981122 systemd[1456]: Queued start job for default target default.target.
Feb 9 09:54:51.982343 systemd[1456]: Reached target paths.target.
Feb 9 09:54:51.982467 systemd[1456]: Reached target sockets.target.
Feb 9 09:54:51.982545 systemd[1456]: Reached target timers.target.
Feb 9 09:54:51.982612 systemd[1456]: Reached target basic.target.
Feb 9 09:54:51.982727 systemd[1456]: Reached target default.target.
Feb 9 09:54:51.982790 systemd[1]: Started user@500.service.
Feb 9 09:54:51.983175 systemd[1456]: Startup finished in 257ms.
Feb 9 09:54:51.983648 systemd[1]: Started session-1.scope.
Feb 9 09:54:52.540282 login[1453]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 09:54:52.543555 systemd-logind[1343]: New session 2 of user core.
Feb 9 09:54:52.544390 systemd[1]: Started session-2.scope.
Feb 9 09:54:57.256942 waagent[1450]: 2024-02-09T09:54:57.256838Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 09:54:57.284269 waagent[1450]: 2024-02-09T09:54:57.284157Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 09:54:57.289165 waagent[1450]: 2024-02-09T09:54:57.289090Z INFO Daemon Daemon Python: 3.9.16
Feb 9 09:54:57.294121 waagent[1450]: 2024-02-09T09:54:57.294043Z INFO Daemon Daemon Run daemon
Feb 9 09:54:57.299178 waagent[1450]: 2024-02-09T09:54:57.299110Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 09:54:57.318948 waagent[1450]: 2024-02-09T09:54:57.318808Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 09:54:57.334703 waagent[1450]: 2024-02-09T09:54:57.334569Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 09:54:57.344812 waagent[1450]: 2024-02-09T09:54:57.344726Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 09:54:57.350137 waagent[1450]: 2024-02-09T09:54:57.350056Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 09:54:57.356366 waagent[1450]: 2024-02-09T09:54:57.356279Z INFO Daemon Daemon Activate resource disk
Feb 9 09:54:57.361612 waagent[1450]: 2024-02-09T09:54:57.361529Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 09:54:57.377216 waagent[1450]: 2024-02-09T09:54:57.377119Z INFO Daemon Daemon Found device: None
Feb 9 09:54:57.382537 waagent[1450]: 2024-02-09T09:54:57.382458Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 09:54:57.392616 waagent[1450]: 2024-02-09T09:54:57.392528Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 09:54:57.405067 waagent[1450]: 2024-02-09T09:54:57.404998Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 09:54:57.411322 waagent[1450]: 2024-02-09T09:54:57.411248Z INFO Daemon Daemon Running default provisioning handler
Feb 9 09:54:57.424509 waagent[1450]: 2024-02-09T09:54:57.424367Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 09:54:57.440011 waagent[1450]: 2024-02-09T09:54:57.439882Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 09:54:57.449960 waagent[1450]: 2024-02-09T09:54:57.449876Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 09:54:57.455467 waagent[1450]: 2024-02-09T09:54:57.455386Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 09:54:57.546907 waagent[1450]: 2024-02-09T09:54:57.546706Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 09:54:57.694684 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 09:54:57.729070 waagent[1450]: 2024-02-09T09:54:57.728926Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 09:54:57.735009 waagent[1450]: 2024-02-09T09:54:57.734922Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 09:54:57.741843 waagent[1450]: 2024-02-09T09:54:57.741770Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 09:54:57.748903 waagent[1450]: 2024-02-09T09:54:57.748827Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 09:54:57.754744 waagent[1450]: 2024-02-09T09:54:57.754675Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 09:54:57.760439 waagent[1450]: 2024-02-09T09:54:57.760375Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 09:54:57.886558 waagent[1450]: 2024-02-09T09:54:57.886440Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 09:54:57.893686 waagent[1450]: 2024-02-09T09:54:57.893641Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 09:54:57.899931 waagent[1450]: 2024-02-09T09:54:57.899866Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 09:54:58.937779 waagent[1450]: 2024-02-09T09:54:58.937627Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 09:54:58.953312 waagent[1450]: 2024-02-09T09:54:58.953236Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 09:54:58.959910 waagent[1450]: 2024-02-09T09:54:58.959843Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 09:54:59.038163 waagent[1450]: 2024-02-09T09:54:59.038025Z INFO Daemon Daemon Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3
Feb 9 09:54:59.047370 waagent[1450]: 2024-02-09T09:54:59.047286Z INFO Daemon Daemon Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key.
Feb 9 09:54:59.058155 waagent[1450]: 2024-02-09T09:54:59.058073Z INFO Daemon Daemon Fetch goal state completed
Feb 9 09:54:59.111563 waagent[1450]: 2024-02-09T09:54:59.111507Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 737e3264-0ac2-49eb-a3f4-49e58a9190ba New eTag: 11099603303111743899]
Feb 9 09:54:59.124018 waagent[1450]: 2024-02-09T09:54:59.123935Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 09:54:59.140921 waagent[1450]: 2024-02-09T09:54:59.140840Z INFO Daemon Daemon Starting provisioning
Feb 9 09:54:59.146798 waagent[1450]: 2024-02-09T09:54:59.146731Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 09:54:59.151999 waagent[1450]: 2024-02-09T09:54:59.151939Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-b353ffea6c]
Feb 9 09:54:59.210024 waagent[1450]: 2024-02-09T09:54:59.209901Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-b353ffea6c]
Feb 9 09:54:59.216689 waagent[1450]: 2024-02-09T09:54:59.216609Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 09:54:59.223664 waagent[1450]: 2024-02-09T09:54:59.223592Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 09:54:59.239835 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 09:54:59.239999 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 09:54:59.240054 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 09:54:59.240339 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:54:59.244231 systemd-networkd[1222]: eth0: DHCPv6 lease lost
Feb 9 09:54:59.245732 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:54:59.245904 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:54:59.247913 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:54:59.275342 systemd-networkd[1500]: enP11520s1: Link UP
Feb 9 09:54:59.275356 systemd-networkd[1500]: enP11520s1: Gained carrier
Feb 9 09:54:59.276356 systemd-networkd[1500]: eth0: Link UP
Feb 9 09:54:59.276367 systemd-networkd[1500]: eth0: Gained carrier
Feb 9 09:54:59.276681 systemd-networkd[1500]: lo: Link UP
Feb 9 09:54:59.276691 systemd-networkd[1500]: lo: Gained carrier
Feb 9 09:54:59.276915 systemd-networkd[1500]: eth0: Gained IPv6LL
Feb 9 09:54:59.277354 systemd-networkd[1500]: Enumeration completed
Feb 9 09:54:59.277466 systemd[1]: Started systemd-networkd.service.
Feb 9 09:54:59.278328 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:59.279130 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 09:54:59.282813 waagent[1450]: 2024-02-09T09:54:59.282662Z INFO Daemon Daemon Create user account if not exists
Feb 9 09:54:59.291171 waagent[1450]: 2024-02-09T09:54:59.291070Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 09:54:59.297278 waagent[1450]: 2024-02-09T09:54:59.297195Z INFO Daemon Daemon Configure sudoer
Feb 9 09:54:59.298283 systemd-networkd[1500]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 09:54:59.303957 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 09:54:59.304691 waagent[1450]: 2024-02-09T09:54:59.304612Z INFO Daemon Daemon Configure sshd
Feb 9 09:54:59.309360 waagent[1450]: 2024-02-09T09:54:59.309282Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 09:55:00.564249 waagent[1450]: 2024-02-09T09:55:00.564146Z INFO Daemon Daemon Provisioning complete
Feb 9 09:55:00.589412 waagent[1450]: 2024-02-09T09:55:00.589343Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 09:55:00.596719 waagent[1450]: 2024-02-09T09:55:00.596629Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 09:55:00.609013 waagent[1450]: 2024-02-09T09:55:00.608926Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 09:55:00.908130 waagent[1509]: 2024-02-09T09:55:00.907978Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 09:55:00.908848 waagent[1509]: 2024-02-09T09:55:00.908782Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:00.908974 waagent[1509]: 2024-02-09T09:55:00.908930Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:00.927911 waagent[1509]: 2024-02-09T09:55:00.927832Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 09:55:00.928103 waagent[1509]: 2024-02-09T09:55:00.928054Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 09:55:00.999130 waagent[1509]: 2024-02-09T09:55:00.998993Z INFO ExtHandler ExtHandler Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3
Feb 9 09:55:00.999349 waagent[1509]: 2024-02-09T09:55:00.999296Z INFO ExtHandler ExtHandler Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key.
Feb 9 09:55:00.999567 waagent[1509]: 2024-02-09T09:55:00.999520Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 09:55:01.020770 waagent[1509]: 2024-02-09T09:55:01.020714Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: cc8a5442-cc27-4224-922b-6ce8ddc9ef8c New eTag: 11099603303111743899]
Feb 9 09:55:01.021405 waagent[1509]: 2024-02-09T09:55:01.021346Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 09:55:01.107715 waagent[1509]: 2024-02-09T09:55:01.107577Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 09:55:01.117861 waagent[1509]: 2024-02-09T09:55:01.117774Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1509
Feb 9 09:55:01.121686 waagent[1509]: 2024-02-09T09:55:01.121620Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 09:55:01.123008 waagent[1509]: 2024-02-09T09:55:01.122951Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 09:55:01.236585 waagent[1509]: 2024-02-09T09:55:01.236471Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 09:55:01.236939 waagent[1509]: 2024-02-09T09:55:01.236880Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 09:55:01.244620 waagent[1509]: 2024-02-09T09:55:01.244551Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 09:55:01.245141 waagent[1509]: 2024-02-09T09:55:01.245082Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 09:55:01.246327 waagent[1509]: 2024-02-09T09:55:01.246266Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 09:55:01.247683 waagent[1509]: 2024-02-09T09:55:01.247611Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 09:55:01.248357 waagent[1509]: 2024-02-09T09:55:01.248296Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:01.248622 waagent[1509]: 2024-02-09T09:55:01.248573Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:01.249292 waagent[1509]: 2024-02-09T09:55:01.249226Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 09:55:01.249932 waagent[1509]: 2024-02-09T09:55:01.249862Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:01.250291 waagent[1509]: 2024-02-09T09:55:01.250225Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:01.250638 waagent[1509]: 2024-02-09T09:55:01.250574Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 09:55:01.250826 waagent[1509]: 2024-02-09T09:55:01.250752Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 09:55:01.250826 waagent[1509]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 09:55:01.250826 waagent[1509]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 09:55:01.250826 waagent[1509]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 09:55:01.250826 waagent[1509]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:01.250826 waagent[1509]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:01.250826 waagent[1509]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:01.251379 waagent[1509]: 2024-02-09T09:55:01.251305Z INFO EnvHandler ExtHandler Configure routes
Feb 9 09:55:01.251881 waagent[1509]: 2024-02-09T09:55:01.251833Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 09:55:01.251881 waagent[1509]: 2024-02-09T09:55:01.251754Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 09:55:01.252167 waagent[1509]: 2024-02-09T09:55:01.252113Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 09:55:01.254611 waagent[1509]: 2024-02-09T09:55:01.254440Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 09:55:01.254758 waagent[1509]: 2024-02-09T09:55:01.254681Z INFO EnvHandler ExtHandler Routes:None
Feb 9 09:55:01.255673 waagent[1509]: 2024-02-09T09:55:01.255580Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 09:55:01.256106 waagent[1509]: 2024-02-09T09:55:01.256038Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 09:55:01.267577 waagent[1509]: 2024-02-09T09:55:01.267512Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 09:55:01.268395 waagent[1509]: 2024-02-09T09:55:01.268344Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 09:55:01.271891 waagent[1509]: 2024-02-09T09:55:01.271810Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 09:55:01.294531 waagent[1509]: 2024-02-09T09:55:01.294467Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 09:55:01.330617 waagent[1509]: 2024-02-09T09:55:01.330521Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1500'
Feb 9 09:55:01.411551 waagent[1509]: 2024-02-09T09:55:01.411488Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 09:55:01.612515 waagent[1450]: 2024-02-09T09:55:01.612358Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 09:55:01.616488 waagent[1450]: 2024-02-09T09:55:01.616431Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 09:55:02.757400 waagent[1536]: 2024-02-09T09:55:02.757300Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 09:55:02.758429 waagent[1536]: 2024-02-09T09:55:02.758372Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 09:55:02.758651 waagent[1536]: 2024-02-09T09:55:02.758604Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 09:55:02.766962 waagent[1536]: 2024-02-09T09:55:02.766851Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 09:55:02.767529 waagent[1536]: 2024-02-09T09:55:02.767474Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:02.767771 waagent[1536]: 2024-02-09T09:55:02.767722Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:02.780588 waagent[1536]: 2024-02-09T09:55:02.780507Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 09:55:02.793281 waagent[1536]: 2024-02-09T09:55:02.793215Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 09:55:02.794527 waagent[1536]: 2024-02-09T09:55:02.794467Z INFO ExtHandler
Feb 9 09:55:02.794780 waagent[1536]: 2024-02-09T09:55:02.794729Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7f1d7f6f-cab3-419f-8933-813c29eaf2c3 eTag: 11099603303111743899 source: Fabric]
Feb 9 09:55:02.795650 waagent[1536]: 2024-02-09T09:55:02.795592Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 09:55:02.797003 waagent[1536]: 2024-02-09T09:55:02.796944Z INFO ExtHandler
Feb 9 09:55:02.797255 waagent[1536]: 2024-02-09T09:55:02.797203Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 9 09:55:02.807149 waagent[1536]: 2024-02-09T09:55:02.807093Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 9 09:55:02.807800 waagent[1536]: 2024-02-09T09:55:02.807756Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 09:55:02.829869 waagent[1536]: 2024-02-09T09:55:02.829801Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 9 09:55:02.904932 waagent[1536]: 2024-02-09T09:55:02.904795Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B8910F831702841F93B7EF734F4F2193722C3D6E', 'hasPrivateKey': False}
Feb 9 09:55:02.906285 waagent[1536]: 2024-02-09T09:55:02.906178Z INFO ExtHandler Downloaded certificate {'thumbprint': '575C47585D1459C00D5A0F07B441A63D325669A3', 'hasPrivateKey': True}
Feb 9 09:55:02.907493 waagent[1536]: 2024-02-09T09:55:02.907434Z INFO ExtHandler Fetch goal state completed
Feb 9 09:55:02.934114 waagent[1536]: 2024-02-09T09:55:02.934038Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1536
Feb 9 09:55:02.937814 waagent[1536]: 2024-02-09T09:55:02.937746Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 09:55:02.939430 waagent[1536]: 2024-02-09T09:55:02.939373Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 09:55:02.944869 waagent[1536]: 2024-02-09T09:55:02.944816Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 09:55:02.945462 waagent[1536]: 2024-02-09T09:55:02.945404Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 09:55:02.953714 waagent[1536]: 2024-02-09T09:55:02.953658Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 09:55:02.954407 waagent[1536]: 2024-02-09T09:55:02.954347Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 09:55:02.960893 waagent[1536]: 2024-02-09T09:55:02.960778Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 9 09:55:02.964719 waagent[1536]: 2024-02-09T09:55:02.964658Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 9 09:55:02.966472 waagent[1536]: 2024-02-09T09:55:02.966402Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 09:55:02.966753 waagent[1536]: 2024-02-09T09:55:02.966683Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:02.967324 waagent[1536]: 2024-02-09T09:55:02.967256Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:02.967940 waagent[1536]: 2024-02-09T09:55:02.967867Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 09:55:02.968269 waagent[1536]: 2024-02-09T09:55:02.968206Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 09:55:02.968269 waagent[1536]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 09:55:02.968269 waagent[1536]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 09:55:02.968269 waagent[1536]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 09:55:02.968269 waagent[1536]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:02.968269 waagent[1536]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:02.968269 waagent[1536]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 09:55:02.970613 waagent[1536]: 2024-02-09T09:55:02.970487Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 09:55:02.971498 waagent[1536]: 2024-02-09T09:55:02.971428Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 09:55:02.971685 waagent[1536]: 2024-02-09T09:55:02.971622Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 09:55:02.974099 waagent[1536]: 2024-02-09T09:55:02.973956Z INFO EnvHandler ExtHandler Configure routes
Feb 9 09:55:02.974336 waagent[1536]: 2024-02-09T09:55:02.974279Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 09:55:02.974462 waagent[1536]: 2024-02-09T09:55:02.974413Z INFO EnvHandler ExtHandler Routes:None
Feb 9 09:55:02.975839 waagent[1536]: 2024-02-09T09:55:02.975651Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 09:55:02.976103 waagent[1536]: 2024-02-09T09:55:02.976033Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 09:55:02.978161 waagent[1536]: 2024-02-09T09:55:02.977959Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 09:55:02.978535 waagent[1536]: 2024-02-09T09:55:02.978443Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 09:55:02.978932 waagent[1536]: 2024-02-09T09:55:02.978866Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 09:55:02.988011 waagent[1536]: 2024-02-09T09:55:02.987929Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 09:55:02.988011 waagent[1536]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 09:55:02.988011 waagent[1536]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 09:55:02.988011 waagent[1536]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:81:96 brd ff:ff:ff:ff:ff:ff
Feb 9 09:55:02.988011 waagent[1536]: 3: enP11520s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:81:96 brd ff:ff:ff:ff:ff:ff\ altname enP11520p0s2
Feb 9 09:55:02.988011 waagent[1536]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 09:55:02.988011 waagent[1536]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 09:55:02.988011 waagent[1536]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 09:55:02.988011 waagent[1536]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 09:55:02.988011 waagent[1536]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 09:55:02.988011 waagent[1536]: 2: eth0 inet6 fe80::20d:3aff:fefc:8196/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 09:55:03.001480 waagent[1536]: 2024-02-09T09:55:03.001390Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 09:55:03.003257 waagent[1536]: 2024-02-09T09:55:03.003158Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 09:55:03.021229 waagent[1536]: 2024-02-09T09:55:03.021116Z INFO ExtHandler ExtHandler
Feb 9 09:55:03.021522 waagent[1536]: 2024-02-09T09:55:03.021465Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 905c1a5f-9ce9-491a-8e9a-b46259d62af0 correlation 722a7bc9-0d29-4892-898b-c0e6de237207 created: 2024-02-09T09:53:11.964058Z]
Feb 9 09:55:03.022566 waagent[1536]: 2024-02-09T09:55:03.022507Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 9 09:55:03.024453 waagent[1536]: 2024-02-09T09:55:03.024399Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Feb 9 09:55:03.051825 waagent[1536]: 2024-02-09T09:55:03.051735Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 9 09:55:03.073370 waagent[1536]: 2024-02-09T09:55:03.073155Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 36446E1A-0745-44E0-B099-D551A421CC38;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 9 09:55:03.228168 waagent[1536]: 2024-02-09T09:55:03.228042Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 9 09:55:03.228168 waagent[1536]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.228168 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.228168 waagent[1536]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.228168 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.228168 waagent[1536]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.228168 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.228168 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 09:55:03.228168 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 09:55:03.228168 waagent[1536]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 09:55:03.236008 waagent[1536]: 2024-02-09T09:55:03.235894Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 9 09:55:03.236008 waagent[1536]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.236008 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.236008 waagent[1536]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.236008 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.236008 waagent[1536]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 09:55:03.236008 waagent[1536]: pkts bytes target prot opt in out source destination
Feb 9 09:55:03.236008 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 09:55:03.236008 waagent[1536]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 09:55:03.236008 waagent[1536]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 09:55:03.236928 waagent[1536]: 2024-02-09T09:55:03.236881Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 9 09:55:23.574964 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 9 09:55:31.599621 update_engine[1345]: I0209 09:55:31.599581 1345 update_attempter.cc:509] Updating boot flags...
Feb 9 09:55:48.904947 systemd[1]: Created slice system-sshd.slice.
Feb 9 09:55:48.905985 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.12.6:39840.service.
Feb 9 09:55:49.555307 sshd[1655]: Accepted publickey for core from 10.200.12.6 port 39840 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:55:49.572864 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:55:49.577393 systemd[1]: Started session-3.scope.
Feb 9 09:55:49.578271 systemd-logind[1343]: New session 3 of user core.
Feb 9 09:55:49.924996 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.12.6:39854.service.
Feb 9 09:55:50.344811 sshd[1660]: Accepted publickey for core from 10.200.12.6 port 39854 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:55:50.346981 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:55:50.350974 systemd[1]: Started session-4.scope.
Feb 9 09:55:50.352142 systemd-logind[1343]: New session 4 of user core.
Feb 9 09:55:50.647942 sshd[1660]: pam_unix(sshd:session): session closed for user core
Feb 9 09:55:50.650364 systemd[1]: sshd@1-10.200.20.38:22-10.200.12.6:39854.service: Deactivated successfully.
Feb 9 09:55:50.651035 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 09:55:50.651539 systemd-logind[1343]: Session 4 logged out. Waiting for processes to exit.
Feb 9 09:55:50.652237 systemd-logind[1343]: Removed session 4.
Feb 9 09:55:50.718311 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.12.6:39870.service.
Feb 9 09:55:51.140998 sshd[1666]: Accepted publickey for core from 10.200.12.6 port 39870 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:55:51.142601 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:55:51.146165 systemd-logind[1343]: New session 5 of user core.
Feb 9 09:55:51.146652 systemd[1]: Started session-5.scope.
Feb 9 09:55:51.444060 sshd[1666]: pam_unix(sshd:session): session closed for user core
Feb 9 09:55:51.446290 systemd[1]: sshd@2-10.200.20.38:22-10.200.12.6:39870.service: Deactivated successfully.
Feb 9 09:55:51.446997 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 09:55:51.447497 systemd-logind[1343]: Session 5 logged out. Waiting for processes to exit.
Feb 9 09:55:51.448277 systemd-logind[1343]: Removed session 5.
Feb 9 09:55:51.514467 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.12.6:39874.service.
Feb 9 09:55:51.937729 sshd[1672]: Accepted publickey for core from 10.200.12.6 port 39874 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:55:51.938962 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:55:51.943020 systemd[1]: Started session-6.scope.
Feb 9 09:55:51.943331 systemd-logind[1343]: New session 6 of user core.
Feb 9 09:55:52.245263 sshd[1672]: pam_unix(sshd:session): session closed for user core
Feb 9 09:55:52.247710 systemd[1]: sshd@3-10.200.20.38:22-10.200.12.6:39874.service: Deactivated successfully.
Feb 9 09:55:52.248383 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 09:55:52.248903 systemd-logind[1343]: Session 6 logged out. Waiting for processes to exit.
Feb 9 09:55:52.249736 systemd-logind[1343]: Removed session 6.
Feb 9 09:55:52.314350 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.12.6:39888.service.
Feb 9 09:55:52.727482 sshd[1678]: Accepted publickey for core from 10.200.12.6 port 39888 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:55:52.728711 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:55:52.732748 systemd[1]: Started session-7.scope.
Feb 9 09:55:52.733702 systemd-logind[1343]: New session 7 of user core.
Feb 9 09:55:53.235628 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 09:55:53.235819 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:55:53.949076 systemd[1]: Starting docker.service...
Feb 9 09:55:53.980038 env[1696]: time="2024-02-09T09:55:53.979985932Z" level=info msg="Starting up"
Feb 9 09:55:53.981182 env[1696]: time="2024-02-09T09:55:53.981155133Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:55:53.981182 env[1696]: time="2024-02-09T09:55:53.981178973Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:55:53.981301 env[1696]: time="2024-02-09T09:55:53.981234973Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:55:53.981301 env[1696]: time="2024-02-09T09:55:53.981246773Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:55:53.982957 env[1696]: time="2024-02-09T09:55:53.982937735Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:55:53.983047 env[1696]: time="2024-02-09T09:55:53.983033855Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:55:53.983108 env[1696]: time="2024-02-09T09:55:53.983093215Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:55:53.983162 env[1696]: time="2024-02-09T09:55:53.983148855Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:55:54.105638 env[1696]: time="2024-02-09T09:55:54.105605395Z" level=info msg="Loading containers: start."
Feb 9 09:55:54.270211 kernel: Initializing XFRM netlink socket
Feb 9 09:55:54.293378 env[1696]: time="2024-02-09T09:55:54.293328329Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 09:55:54.410431 systemd-networkd[1500]: docker0: Link UP
Feb 9 09:55:54.431837 env[1696]: time="2024-02-09T09:55:54.431805887Z" level=info msg="Loading containers: done."
Feb 9 09:55:54.441063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck388576487-merged.mount: Deactivated successfully.
Feb 9 09:55:54.454240 env[1696]: time="2024-02-09T09:55:54.454169392Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 09:55:54.454429 env[1696]: time="2024-02-09T09:55:54.454405312Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 09:55:54.454540 env[1696]: time="2024-02-09T09:55:54.454518673Z" level=info msg="Daemon has completed initialization"
Feb 9 09:55:54.483758 systemd[1]: Started docker.service.
Feb 9 09:55:54.490561 env[1696]: time="2024-02-09T09:55:54.490495193Z" level=info msg="API listen on /run/docker.sock"
Feb 9 09:55:54.505197 systemd[1]: Reloading.
Feb 9 09:55:54.577141 /usr/lib/systemd/system-generators/torcx-generator[1828]: time="2024-02-09T09:55:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:55:54.577173 /usr/lib/systemd/system-generators/torcx-generator[1828]: time="2024-02-09T09:55:54Z" level=info msg="torcx already run"
Feb 9 09:55:54.642703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:55:54.642877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:55:54.659703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:55:54.737476 systemd[1]: Started kubelet.service.
Feb 9 09:55:54.795951 kubelet[1884]: E0209 09:55:54.795880 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 09:55:54.797847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:55:54.797963 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:55:59.149506 env[1354]: time="2024-02-09T09:55:59.149461543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 9 09:55:59.937591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31948930.mount: Deactivated successfully.
Feb 9 09:56:01.638124 env[1354]: time="2024-02-09T09:56:01.638065804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:01.643916 env[1354]: time="2024-02-09T09:56:01.643874210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:01.647413 env[1354]: time="2024-02-09T09:56:01.647376573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:01.652933 env[1354]: time="2024-02-09T09:56:01.652897218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:01.653687 env[1354]: time="2024-02-09T09:56:01.653657099Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\""
Feb 9 09:56:01.662141 env[1354]: time="2024-02-09T09:56:01.662110547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 9 09:56:03.615527 env[1354]: time="2024-02-09T09:56:03.615463700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:03.623130 env[1354]: time="2024-02-09T09:56:03.623083707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:03.629266 env[1354]: time="2024-02-09T09:56:03.629231833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:03.632593 env[1354]: time="2024-02-09T09:56:03.632559556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:03.633322 env[1354]: time="2024-02-09T09:56:03.633294036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\""
Feb 9 09:56:03.641718 env[1354]: time="2024-02-09T09:56:03.641675204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 9 09:56:04.852564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 09:56:04.852729 systemd[1]: Stopped kubelet.service.
Feb 9 09:56:04.854101 systemd[1]: Started kubelet.service.
Feb 9 09:56:04.883226 env[1354]: time="2024-02-09T09:56:04.882592618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:04.890693 env[1354]: time="2024-02-09T09:56:04.890263705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:04.894718 env[1354]: time="2024-02-09T09:56:04.894670549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:04.900099 kubelet[1911]: E0209 09:56:04.900067 1911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 09:56:04.902173 env[1354]: time="2024-02-09T09:56:04.900871074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:04.902173 env[1354]: time="2024-02-09T09:56:04.901375114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\""
Feb 9 09:56:04.904878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:56:04.904999 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:56:04.909997 env[1354]: time="2024-02-09T09:56:04.909949242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 9 09:56:05.944607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315662044.mount: Deactivated successfully.
Feb 9 09:56:06.786389 env[1354]: time="2024-02-09T09:56:06.786345428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:06.795259 env[1354]: time="2024-02-09T09:56:06.795224956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:06.798379 env[1354]: time="2024-02-09T09:56:06.798357158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:06.801890 env[1354]: time="2024-02-09T09:56:06.801853681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:06.802372 env[1354]: time="2024-02-09T09:56:06.802346161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\""
Feb 9 09:56:06.810932 env[1354]: time="2024-02-09T09:56:06.810890009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 09:56:07.412028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035757369.mount: Deactivated successfully.
Feb 9 09:56:07.440280 env[1354]: time="2024-02-09T09:56:07.440222123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:07.447601 env[1354]: time="2024-02-09T09:56:07.447550369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:07.453712 env[1354]: time="2024-02-09T09:56:07.453675374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:07.459242 env[1354]: time="2024-02-09T09:56:07.459216218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:07.459829 env[1354]: time="2024-02-09T09:56:07.459801139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 9 09:56:07.468336 env[1354]: time="2024-02-09T09:56:07.468302866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 9 09:56:08.163741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122067883.mount: Deactivated successfully.
Feb 9 09:56:13.742383 env[1354]: time="2024-02-09T09:56:13.742319146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:13.748785 env[1354]: time="2024-02-09T09:56:13.748755391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:13.753389 env[1354]: time="2024-02-09T09:56:13.753348594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:13.758431 env[1354]: time="2024-02-09T09:56:13.758406038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:13.759350 env[1354]: time="2024-02-09T09:56:13.759325318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\""
Feb 9 09:56:13.768948 env[1354]: time="2024-02-09T09:56:13.768910365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 9 09:56:14.498893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193002639.mount: Deactivated successfully.
Feb 9 09:56:15.088094 env[1354]: time="2024-02-09T09:56:15.088043669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:15.102460 env[1354]: time="2024-02-09T09:56:15.102424839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:15.102561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 9 09:56:15.102727 systemd[1]: Stopped kubelet.service.
Feb 9 09:56:15.104083 systemd[1]: Started kubelet.service.
Feb 9 09:56:15.109866 env[1354]: time="2024-02-09T09:56:15.109742604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:15.116654 env[1354]: time="2024-02-09T09:56:15.116617768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:15.117224 env[1354]: time="2024-02-09T09:56:15.116985449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 9 09:56:15.164719 kubelet[1937]: E0209 09:56:15.164679 1937 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 09:56:15.167153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:56:15.167295 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:56:20.440322 systemd[1]: Stopped kubelet.service.
Feb 9 09:56:20.461791 systemd[1]: Reloading.
Feb 9 09:56:20.521859 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-09T09:56:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:56:20.522262 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-09T09:56:20Z" level=info msg="torcx already run"
Feb 9 09:56:20.597218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:56:20.597234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:56:20.613911 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:56:20.704941 systemd[1]: Started kubelet.service.
Feb 9 09:56:20.755294 kubelet[2087]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:56:20.755294 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:56:20.755294 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:56:20.755648 kubelet[2087]: I0209 09:56:20.755353 2087 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 09:56:21.509214 kubelet[2087]: I0209 09:56:21.509165 2087 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 09:56:21.509367 kubelet[2087]: I0209 09:56:21.509356 2087 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 09:56:21.509631 kubelet[2087]: I0209 09:56:21.509617 2087 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 09:56:21.514748 kubelet[2087]: E0209 09:56:21.514727 2087 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.38:6443: connect: connection refused
Feb 9 09:56:21.514902 kubelet[2087]: I0209 09:56:21.514891 2087 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 09:56:21.519229 kubelet[2087]: W0209 09:56:21.519206 2087 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 09:56:21.519741 kubelet[2087]: I0209 09:56:21.519724 2087 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 09:56:21.519953 kubelet[2087]: I0209 09:56:21.519939 2087 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 09:56:21.520104 kubelet[2087]: I0209 09:56:21.520084 2087 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 09:56:21.520210 kubelet[2087]: I0209 09:56:21.520109 2087 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 09:56:21.520210 kubelet[2087]: I0209 09:56:21.520118 2087 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 09:56:21.520268 kubelet[2087]: I0209 09:56:21.520245 2087 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:56:21.520361 kubelet[2087]: I0209 09:56:21.520343 2087 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 09:56:21.520391 kubelet[2087]: I0209 09:56:21.520365 2087 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 09:56:21.520802 kubelet[2087]: W0209 09:56:21.520766 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b353ffea6c&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused
Feb 9 09:56:21.520837 kubelet[2087]: E0209 09:56:21.520819 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b353ffea6c&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused
Feb 9 09:56:21.520863 kubelet[2087]: I0209 09:56:21.520841 2087 kubelet.go:309] "Adding apiserver pod source"
Feb 9 09:56:21.520863 kubelet[2087]: I0209 09:56:21.520860 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 09:56:21.524001 kubelet[2087]: I0209 09:56:21.523987 2087 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 09:56:21.524356 kubelet[2087]: W0209 09:56:21.524343 2087 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 09:56:21.524846 kubelet[2087]: W0209 09:56:21.524793 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.524846 kubelet[2087]: E0209 09:56:21.524843 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.525025 kubelet[2087]: I0209 09:56:21.525012 2087 server.go:1232] "Started kubelet" Feb 9 09:56:21.525172 kubelet[2087]: I0209 09:56:21.525152 2087 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:21.525453 kubelet[2087]: I0209 09:56:21.525438 2087 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:56:21.525777 kubelet[2087]: I0209 09:56:21.525747 2087 server.go:462] "Adding debug handlers to kubelet server" Feb 9 09:56:21.525847 kubelet[2087]: I0209 09:56:21.525757 2087 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 09:56:21.526102 kubelet[2087]: E0209 09:56:21.526023 2087 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b353ffea6c.17b229442d818043", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"ci-3510.3.2-a-b353ffea6c", UID:"ci-3510.3.2-a-b353ffea6c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b353ffea6c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 21, 524815939, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 21, 524815939, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-b353ffea6c"}': 'Post "https://10.200.20.38:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.38:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:56:21.527242 kubelet[2087]: E0209 09:56:21.527225 2087 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:21.527346 kubelet[2087]: E0209 09:56:21.527335 2087 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:21.534314 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 09:56:21.534820 kubelet[2087]: I0209 09:56:21.534791 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:21.535500 kubelet[2087]: I0209 09:56:21.535482 2087 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 09:56:21.536636 kubelet[2087]: I0209 09:56:21.536617 2087 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:21.536869 kubelet[2087]: I0209 09:56:21.536857 2087 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 09:56:21.538792 kubelet[2087]: W0209 09:56:21.538742 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.538885 kubelet[2087]: E0209 09:56:21.538804 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.539060 kubelet[2087]: E0209 09:56:21.539034 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Feb 9 09:56:21.597030 kubelet[2087]: I0209 09:56:21.596992 2087 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 09:56:21.598173 kubelet[2087]: I0209 09:56:21.598157 2087 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 09:56:21.598348 kubelet[2087]: I0209 09:56:21.598337 2087 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 09:56:21.598575 kubelet[2087]: I0209 09:56:21.598552 2087 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 09:56:21.598641 kubelet[2087]: E0209 09:56:21.598616 2087 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:21.599248 kubelet[2087]: W0209 09:56:21.599050 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.599248 kubelet[2087]: E0209 09:56:21.599080 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:21.624006 kubelet[2087]: I0209 09:56:21.623982 2087 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:21.624161 kubelet[2087]: I0209 09:56:21.624150 2087 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:21.624249 kubelet[2087]: I0209 09:56:21.624239 2087 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:21.633935 kubelet[2087]: I0209 09:56:21.633907 2087 policy_none.go:49] "None policy: Start" Feb 9 09:56:21.634755 kubelet[2087]: I0209 09:56:21.634738 2087 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:21.634867 kubelet[2087]: I0209 09:56:21.634856 2087 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:21.637628 kubelet[2087]: I0209 09:56:21.637610 2087 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.638125 kubelet[2087]: E0209 09:56:21.638112 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.646090 systemd[1]: Created slice kubepods.slice. Feb 9 09:56:21.650221 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:56:21.652872 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:56:21.664860 kubelet[2087]: I0209 09:56:21.664433 2087 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:21.664978 kubelet[2087]: I0209 09:56:21.664936 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:21.666156 kubelet[2087]: E0209 09:56:21.666105 2087 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-b353ffea6c\" not found" Feb 9 09:56:21.699754 kubelet[2087]: I0209 09:56:21.699713 2087 topology_manager.go:215] "Topology Admit Handler" podUID="e3047ddd82a2d9b46c165ae7eca1a82f" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.701395 kubelet[2087]: I0209 09:56:21.701376 2087 topology_manager.go:215] "Topology Admit Handler" podUID="ca011cf73094188d153c6ed8fbeb964a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.702934 kubelet[2087]: I0209 09:56:21.702905 2087 topology_manager.go:215] "Topology Admit Handler" podUID="9bc53ac67eb089764b64e51d41e6ca46" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.708344 systemd[1]: Created slice kubepods-burstable-pode3047ddd82a2d9b46c165ae7eca1a82f.slice. Feb 9 09:56:21.721414 systemd[1]: Created slice kubepods-burstable-podca011cf73094188d153c6ed8fbeb964a.slice. 
Feb 9 09:56:21.725564 systemd[1]: Created slice kubepods-burstable-pod9bc53ac67eb089764b64e51d41e6ca46.slice. Feb 9 09:56:21.740290 kubelet[2087]: E0209 09:56:21.740254 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Feb 9 09:56:21.839138 kubelet[2087]: I0209 09:56:21.838896 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.839138 kubelet[2087]: I0209 09:56:21.838936 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.839138 kubelet[2087]: I0209 09:56:21.838958 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.839138 kubelet[2087]: I0209 09:56:21.838978 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/9bc53ac67eb089764b64e51d41e6ca46-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b353ffea6c\" (UID: \"9bc53ac67eb089764b64e51d41e6ca46\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.839138 kubelet[2087]: I0209 09:56:21.838997 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.840120 kubelet[2087]: I0209 09:56:21.839015 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.840120 kubelet[2087]: I0209 09:56:21.839036 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.840120 kubelet[2087]: I0209 09:56:21.839054 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.840120 kubelet[2087]: I0209 09:56:21.839083 2087 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.841536 kubelet[2087]: I0209 09:56:21.841508 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:21.841796 kubelet[2087]: E0209 09:56:21.841777 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:22.022111 env[1354]: time="2024-02-09T09:56:22.022019268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b353ffea6c,Uid:e3047ddd82a2d9b46c165ae7eca1a82f,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:22.025133 env[1354]: time="2024-02-09T09:56:22.025094710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b353ffea6c,Uid:ca011cf73094188d153c6ed8fbeb964a,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:22.028347 env[1354]: time="2024-02-09T09:56:22.028322632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b353ffea6c,Uid:9bc53ac67eb089764b64e51d41e6ca46,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:22.140744 kubelet[2087]: E0209 09:56:22.140706 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Feb 9 09:56:22.243716 kubelet[2087]: I0209 09:56:22.243646 2087 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:22.243982 kubelet[2087]: E0209 09:56:22.243951 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:22.426927 kubelet[2087]: W0209 09:56:22.426776 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.426927 kubelet[2087]: E0209 09:56:22.426832 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.609626 kubelet[2087]: W0209 09:56:22.609567 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b353ffea6c&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.609626 kubelet[2087]: E0209 09:56:22.609631 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b353ffea6c&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.683327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32610001.mount: Deactivated successfully. 
Feb 9 09:56:22.703664 kubelet[2087]: W0209 09:56:22.703626 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.703664 kubelet[2087]: E0209 09:56:22.703668 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.707113 env[1354]: time="2024-02-09T09:56:22.707061698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.724248 env[1354]: time="2024-02-09T09:56:22.724216187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.730875 env[1354]: time="2024-02-09T09:56:22.730836831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.734628 env[1354]: time="2024-02-09T09:56:22.734595793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.746786 env[1354]: time="2024-02-09T09:56:22.746752360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.750520 env[1354]: time="2024-02-09T09:56:22.750488882Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.755200 env[1354]: time="2024-02-09T09:56:22.755159565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.758572 env[1354]: time="2024-02-09T09:56:22.758546967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.762547 kubelet[2087]: W0209 09:56:22.762517 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.762634 kubelet[2087]: E0209 09:56:22.762559 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 9 09:56:22.764275 env[1354]: time="2024-02-09T09:56:22.764237010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.772730 env[1354]: time="2024-02-09T09:56:22.772685735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.777965 env[1354]: time="2024-02-09T09:56:22.777933658Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.792996 env[1354]: time="2024-02-09T09:56:22.792952946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.844835 env[1354]: time="2024-02-09T09:56:22.844644616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:22.844835 env[1354]: time="2024-02-09T09:56:22.844682776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:22.844835 env[1354]: time="2024-02-09T09:56:22.844692416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:22.845266 env[1354]: time="2024-02-09T09:56:22.845210616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c823c5b776f3f3886ea0384ed607ca2fc99c35440ddfb51edbf97e2eff14020 pid=2125 runtime=io.containerd.runc.v2 Feb 9 09:56:22.863214 systemd[1]: Started cri-containerd-3c823c5b776f3f3886ea0384ed607ca2fc99c35440ddfb51edbf97e2eff14020.scope. Feb 9 09:56:22.873821 env[1354]: time="2024-02-09T09:56:22.873754792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:22.874001 env[1354]: time="2024-02-09T09:56:22.873978312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:22.874097 env[1354]: time="2024-02-09T09:56:22.874076593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:22.874379 env[1354]: time="2024-02-09T09:56:22.874351313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b pid=2155 runtime=io.containerd.runc.v2 Feb 9 09:56:22.890602 env[1354]: time="2024-02-09T09:56:22.890368882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:22.890602 env[1354]: time="2024-02-09T09:56:22.890407442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:22.890602 env[1354]: time="2024-02-09T09:56:22.890418442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:22.892061 env[1354]: time="2024-02-09T09:56:22.890913082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a pid=2183 runtime=io.containerd.runc.v2 Feb 9 09:56:22.894924 systemd[1]: Started cri-containerd-2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b.scope. Feb 9 09:56:22.922298 systemd[1]: Started cri-containerd-f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a.scope. 
Feb 9 09:56:22.929337 env[1354]: time="2024-02-09T09:56:22.929295024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b353ffea6c,Uid:e3047ddd82a2d9b46c165ae7eca1a82f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c823c5b776f3f3886ea0384ed607ca2fc99c35440ddfb51edbf97e2eff14020\"" Feb 9 09:56:22.934931 env[1354]: time="2024-02-09T09:56:22.934847547Z" level=info msg="CreateContainer within sandbox \"3c823c5b776f3f3886ea0384ed607ca2fc99c35440ddfb51edbf97e2eff14020\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:56:22.937813 env[1354]: time="2024-02-09T09:56:22.937774189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b353ffea6c,Uid:9bc53ac67eb089764b64e51d41e6ca46,Namespace:kube-system,Attempt:0,} returns sandbox id \"2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b\"" Feb 9 09:56:22.942145 kubelet[2087]: E0209 09:56:22.941797 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Feb 9 09:56:22.943276 env[1354]: time="2024-02-09T09:56:22.943250632Z" level=info msg="CreateContainer within sandbox \"2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:56:22.965883 env[1354]: time="2024-02-09T09:56:22.965840085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b353ffea6c,Uid:ca011cf73094188d153c6ed8fbeb964a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a\"" Feb 9 09:56:22.969041 env[1354]: time="2024-02-09T09:56:22.968984167Z" level=info msg="CreateContainer within sandbox 
\"f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:56:22.976510 env[1354]: time="2024-02-09T09:56:22.976464651Z" level=info msg="CreateContainer within sandbox \"3c823c5b776f3f3886ea0384ed607ca2fc99c35440ddfb51edbf97e2eff14020\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99de18def708a92e2dd49886b95b8f5d918ea0bc71a0168814dd0e1b57d03c79\"" Feb 9 09:56:22.977108 env[1354]: time="2024-02-09T09:56:22.977086171Z" level=info msg="StartContainer for \"99de18def708a92e2dd49886b95b8f5d918ea0bc71a0168814dd0e1b57d03c79\"" Feb 9 09:56:22.992195 systemd[1]: Started cri-containerd-99de18def708a92e2dd49886b95b8f5d918ea0bc71a0168814dd0e1b57d03c79.scope. Feb 9 09:56:23.006750 env[1354]: time="2024-02-09T09:56:23.006711548Z" level=info msg="CreateContainer within sandbox \"2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224\"" Feb 9 09:56:23.007394 env[1354]: time="2024-02-09T09:56:23.007358508Z" level=info msg="StartContainer for \"4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224\"" Feb 9 09:56:23.035040 env[1354]: time="2024-02-09T09:56:23.035001324Z" level=info msg="CreateContainer within sandbox \"f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4\"" Feb 9 09:56:23.038301 systemd[1]: Started cri-containerd-4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224.scope. 
Feb 9 09:56:23.038623 env[1354]: time="2024-02-09T09:56:23.038079725Z" level=info msg="StartContainer for \"5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4\"" Feb 9 09:56:23.040082 env[1354]: time="2024-02-09T09:56:23.040056606Z" level=info msg="StartContainer for \"99de18def708a92e2dd49886b95b8f5d918ea0bc71a0168814dd0e1b57d03c79\" returns successfully" Feb 9 09:56:23.049012 kubelet[2087]: I0209 09:56:23.048644 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:23.049012 kubelet[2087]: E0209 09:56:23.048995 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:23.065779 systemd[1]: Started cri-containerd-5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4.scope. Feb 9 09:56:23.092138 env[1354]: time="2024-02-09T09:56:23.092078435Z" level=info msg="StartContainer for \"4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224\" returns successfully" Feb 9 09:56:23.112407 env[1354]: time="2024-02-09T09:56:23.112355687Z" level=info msg="StartContainer for \"5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4\" returns successfully" Feb 9 09:56:24.651074 kubelet[2087]: I0209 09:56:24.651042 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:26.124973 kubelet[2087]: E0209 09:56:26.124947 2087 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-b353ffea6c\" not found" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:26.217098 kubelet[2087]: I0209 09:56:26.217070 2087 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b353ffea6c" Feb 9 09:56:26.525887 kubelet[2087]: I0209 09:56:26.525784 2087 apiserver.go:52] "Watching apiserver" Feb 9 09:56:26.537280 
kubelet[2087]: I0209 09:56:26.537246 2087 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:27.135496 kubelet[2087]: W0209 09:56:27.135471 2087 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 09:56:29.338389 systemd[1]: Reloading. Feb 9 09:56:29.448719 /usr/lib/systemd/system-generators/torcx-generator[2376]: time="2024-02-09T09:56:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:29.448750 /usr/lib/systemd/system-generators/torcx-generator[2376]: time="2024-02-09T09:56:29Z" level=info msg="torcx already run" Feb 9 09:56:29.533643 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:29.533660 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:29.551635 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:29.658506 systemd[1]: Stopping kubelet.service... Feb 9 09:56:29.659433 kubelet[2087]: I0209 09:56:29.659399 2087 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:29.673853 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:56:29.674216 systemd[1]: Stopped kubelet.service. Feb 9 09:56:29.674353 systemd[1]: kubelet.service: Consumed 1.091s CPU time. 
Feb 9 09:56:29.676718 systemd[1]: Started kubelet.service.
Feb 9 09:56:29.735964 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:56:29.735964 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:56:29.735964 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:56:29.736416 kubelet[2429]: I0209 09:56:29.736003 2429 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 09:56:29.740173 kubelet[2429]: I0209 09:56:29.740147 2429 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 09:56:29.740173 kubelet[2429]: I0209 09:56:29.740172 2429 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 09:56:29.740374 kubelet[2429]: I0209 09:56:29.740355 2429 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 09:56:29.743150 kubelet[2429]: I0209 09:56:29.743016 2429 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 09:56:29.746167 kubelet[2429]: I0209 09:56:29.746144 2429 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 09:56:29.750358 kubelet[2429]: W0209 09:56:29.750340 2429 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 09:56:29.751128 kubelet[2429]: I0209 09:56:29.751113 2429 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 09:56:29.751435 kubelet[2429]: I0209 09:56:29.751422 2429 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 09:56:29.751717 kubelet[2429]: I0209 09:56:29.751700 2429 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 09:56:29.751928 kubelet[2429]: I0209 09:56:29.751914 2429 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 09:56:29.752004 kubelet[2429]: I0209 09:56:29.751994 2429 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 09:56:29.752076 kubelet[2429]: I0209 09:56:29.752068 2429 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:56:29.753018 kubelet[2429]: I0209 09:56:29.753000 2429 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 09:56:29.762417 kubelet[2429]: I0209 09:56:29.756231 2429 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 09:56:29.762417 kubelet[2429]: I0209 09:56:29.756275 2429 kubelet.go:309] "Adding apiserver pod source"
Feb 9 09:56:29.762417 kubelet[2429]: I0209 09:56:29.756296 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 09:56:29.766151 kubelet[2429]: I0209 09:56:29.766128 2429 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 09:56:29.766859 kubelet[2429]: I0209 09:56:29.766827 2429 server.go:1232] "Started kubelet"
Feb 9 09:56:29.771182 kubelet[2429]: I0209 09:56:29.771013 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 09:56:29.772050 kubelet[2429]: E0209 09:56:29.772034 2429 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 09:56:29.772150 kubelet[2429]: E0209 09:56:29.772140 2429 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 09:56:29.772842 kubelet[2429]: I0209 09:56:29.772829 2429 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 09:56:29.773528 kubelet[2429]: I0209 09:56:29.773511 2429 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 09:56:29.778200 kubelet[2429]: I0209 09:56:29.775216 2429 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 09:56:29.778200 kubelet[2429]: I0209 09:56:29.776619 2429 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 09:56:29.778200 kubelet[2429]: I0209 09:56:29.776748 2429 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 09:56:29.778200 kubelet[2429]: I0209 09:56:29.777945 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 09:56:29.778683 kubelet[2429]: I0209 09:56:29.778653 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 09:56:29.778722 kubelet[2429]: I0209 09:56:29.778686 2429 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 09:56:29.778722 kubelet[2429]: I0209 09:56:29.778702 2429 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 09:56:29.778771 kubelet[2429]: E0209 09:56:29.778743 2429 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 09:56:29.785984 kubelet[2429]: I0209 09:56:29.780011 2429 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 09:56:29.786997 kubelet[2429]: I0209 09:56:29.786982 2429 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 09:56:29.877575 kubelet[2429]: I0209 09:56:29.877543 2429 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:29.879603 kubelet[2429]: E0209 09:56:29.879569 2429 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 9 09:56:29.889354 sudo[2458]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 09:56:29.889553 sudo[2458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890403 2429 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890431 2429 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890449 2429 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890600 2429 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890622 2429 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 9 09:56:29.890751 kubelet[2429]: I0209 09:56:29.890629 2429 policy_none.go:49] "None policy: Start"
Feb 9 09:56:29.895834 kubelet[2429]: I0209 09:56:29.892541 2429 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 09:56:29.895834 kubelet[2429]: I0209 09:56:29.892572 2429 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 09:56:29.895834 kubelet[2429]: I0209 09:56:29.892744 2429 state_mem.go:75] "Updated machine memory state"
Feb 9 09:56:29.902820 kubelet[2429]: I0209 09:56:29.902793 2429 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 09:56:29.903038 kubelet[2429]: I0209 09:56:29.903018 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 09:56:29.911132 kubelet[2429]: I0209 09:56:29.910858 2429 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:29.911132 kubelet[2429]: I0209 09:56:29.910926 2429 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.080678 kubelet[2429]: I0209 09:56:30.080637 2429 topology_manager.go:215] "Topology Admit Handler" podUID="ca011cf73094188d153c6ed8fbeb964a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.080824 kubelet[2429]: I0209 09:56:30.080756 2429 topology_manager.go:215] "Topology Admit Handler" podUID="9bc53ac67eb089764b64e51d41e6ca46" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.080824 kubelet[2429]: I0209 09:56:30.080790 2429 topology_manager.go:215] "Topology Admit Handler" podUID="e3047ddd82a2d9b46c165ae7eca1a82f" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.085384 kubelet[2429]: W0209 09:56:30.085350 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 09:56:30.094832 kubelet[2429]: W0209 09:56:30.094812 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 09:56:30.105037 kubelet[2429]: W0209 09:56:30.105015 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 09:56:30.105240 kubelet[2429]: E0209 09:56:30.105227 2429 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178487 kubelet[2429]: I0209 09:56:30.178387 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178487 kubelet[2429]: I0209 09:56:30.178440 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178487 kubelet[2429]: I0209 09:56:30.178463 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178655 kubelet[2429]: I0209 09:56:30.178498 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178655 kubelet[2429]: I0209 09:56:30.178526 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178655 kubelet[2429]: I0209 09:56:30.178546 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca011cf73094188d153c6ed8fbeb964a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b353ffea6c\" (UID: \"ca011cf73094188d153c6ed8fbeb964a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178655 kubelet[2429]: I0209 09:56:30.178565 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bc53ac67eb089764b64e51d41e6ca46-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b353ffea6c\" (UID: \"9bc53ac67eb089764b64e51d41e6ca46\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178655 kubelet[2429]: I0209 09:56:30.178597 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.178769 kubelet[2429]: I0209 09:56:30.178624 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3047ddd82a2d9b46c165ae7eca1a82f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b353ffea6c\" (UID: \"e3047ddd82a2d9b46c165ae7eca1a82f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c"
Feb 9 09:56:30.421768 sudo[2458]: pam_unix(sudo:session): session closed for user root
Feb 9 09:56:30.763351 kubelet[2429]: I0209 09:56:30.763322 2429 apiserver.go:52] "Watching apiserver"
Feb 9 09:56:30.776871 kubelet[2429]: I0209 09:56:30.776834 2429 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 09:56:30.900079 kubelet[2429]: I0209 09:56:30.900043 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c" podStartSLOduration=3.899983831 podCreationTimestamp="2024-02-09 09:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:30.89830875 +0000 UTC m=+1.216730468" watchObservedRunningTime="2024-02-09 09:56:30.899983831 +0000 UTC m=+1.218405549"
Feb 9 09:56:30.900229 kubelet[2429]: I0209 09:56:30.900151 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b353ffea6c" podStartSLOduration=0.900134551 podCreationTimestamp="2024-02-09 09:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:30.870168056 +0000 UTC m=+1.188589814" watchObservedRunningTime="2024-02-09 09:56:30.900134551 +0000 UTC m=+1.218556309"
Feb 9 09:56:30.931943 kubelet[2429]: I0209 09:56:30.931877 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b353ffea6c" podStartSLOduration=0.931819566 podCreationTimestamp="2024-02-09 09:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:30.915477478 +0000 UTC m=+1.233899236" watchObservedRunningTime="2024-02-09 09:56:30.931819566 +0000 UTC m=+1.250241324"
Feb 9 09:56:32.502057 sudo[1681]: pam_unix(sudo:session): session closed for user root
Feb 9 09:56:32.585629 sshd[1678]: pam_unix(sshd:session): session closed for user core
Feb 9 09:56:32.588182 systemd-logind[1343]: Session 7 logged out. Waiting for processes to exit.
Feb 9 09:56:32.588356 systemd[1]: sshd@4-10.200.20.38:22-10.200.12.6:39888.service: Deactivated successfully.
Feb 9 09:56:32.589038 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 09:56:32.589246 systemd[1]: session-7.scope: Consumed 7.223s CPU time.
Feb 9 09:56:32.589883 systemd-logind[1343]: Removed session 7.
Feb 9 09:56:42.314527 kubelet[2429]: I0209 09:56:42.314491 2429 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 09:56:42.314931 env[1354]: time="2024-02-09T09:56:42.314807381Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 09:56:42.315161 kubelet[2429]: I0209 09:56:42.315138 2429 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 09:56:43.007435 kubelet[2429]: I0209 09:56:43.007382 2429 topology_manager.go:215] "Topology Admit Handler" podUID="12f58850-8e6c-4fa8-a6e8-32fe28b7eb79" podNamespace="kube-system" podName="kube-proxy-87wx9"
Feb 9 09:56:43.010129 kubelet[2429]: I0209 09:56:43.010101 2429 topology_manager.go:215] "Topology Admit Handler" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" podNamespace="kube-system" podName="cilium-s9d74"
Feb 9 09:56:43.013632 systemd[1]: Created slice kubepods-besteffort-pod12f58850_8e6c_4fa8_a6e8_32fe28b7eb79.slice.
Feb 9 09:56:43.021075 systemd[1]: Created slice kubepods-burstable-poddc6bc49e_686d_4fb0_9969_4dc4513aeb0e.slice.
Feb 9 09:56:43.041628 kubelet[2429]: I0209 09:56:43.041596 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-cgroup\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.041843 kubelet[2429]: I0209 09:56:43.041831 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-config-path\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.041951 kubelet[2429]: I0209 09:56:43.041942 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-etc-cni-netd\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042051 kubelet[2429]: I0209 09:56:43.042042 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4j7\" (UniqueName: \"kubernetes.io/projected/12f58850-8e6c-4fa8-a6e8-32fe28b7eb79-kube-api-access-wp4j7\") pod \"kube-proxy-87wx9\" (UID: \"12f58850-8e6c-4fa8-a6e8-32fe28b7eb79\") " pod="kube-system/kube-proxy-87wx9"
Feb 9 09:56:43.042155 kubelet[2429]: I0209 09:56:43.042146 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-run\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042289 kubelet[2429]: I0209 09:56:43.042269 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cni-path\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042392 kubelet[2429]: I0209 09:56:43.042383 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-net\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042477 kubelet[2429]: I0209 09:56:43.042468 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hostproc\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042570 kubelet[2429]: I0209 09:56:43.042553 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-lib-modules\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042713 kubelet[2429]: I0209 09:56:43.042703 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-xtables-lock\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042809 kubelet[2429]: I0209 09:56:43.042799 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-clustermesh-secrets\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.042912 kubelet[2429]: I0209 09:56:43.042902 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12f58850-8e6c-4fa8-a6e8-32fe28b7eb79-lib-modules\") pod \"kube-proxy-87wx9\" (UID: \"12f58850-8e6c-4fa8-a6e8-32fe28b7eb79\") " pod="kube-system/kube-proxy-87wx9"
Feb 9 09:56:43.043006 kubelet[2429]: I0209 09:56:43.042996 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12f58850-8e6c-4fa8-a6e8-32fe28b7eb79-xtables-lock\") pod \"kube-proxy-87wx9\" (UID: \"12f58850-8e6c-4fa8-a6e8-32fe28b7eb79\") " pod="kube-system/kube-proxy-87wx9"
Feb 9 09:56:43.043094 kubelet[2429]: I0209 09:56:43.043085 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-kernel\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.043199 kubelet[2429]: I0209 09:56:43.043177 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz476\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-kube-api-access-tz476\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.043301 kubelet[2429]: I0209 09:56:43.043293 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hubble-tls\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.043396 kubelet[2429]: I0209 09:56:43.043387 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12f58850-8e6c-4fa8-a6e8-32fe28b7eb79-kube-proxy\") pod \"kube-proxy-87wx9\" (UID: \"12f58850-8e6c-4fa8-a6e8-32fe28b7eb79\") " pod="kube-system/kube-proxy-87wx9"
Feb 9 09:56:43.043486 kubelet[2429]: I0209 09:56:43.043477 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-bpf-maps\") pod \"cilium-s9d74\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " pod="kube-system/cilium-s9d74"
Feb 9 09:56:43.284149 kubelet[2429]: I0209 09:56:43.284039 2429 topology_manager.go:215] "Topology Admit Handler" podUID="806f1e00-073c-4833-9820-c88731c8fc4d" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-m5vhr"
Feb 9 09:56:43.289631 systemd[1]: Created slice kubepods-besteffort-pod806f1e00_073c_4833_9820_c88731c8fc4d.slice.
Feb 9 09:56:43.320066 env[1354]: time="2024-02-09T09:56:43.320018083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87wx9,Uid:12f58850-8e6c-4fa8-a6e8-32fe28b7eb79,Namespace:kube-system,Attempt:0,}"
Feb 9 09:56:43.324366 env[1354]: time="2024-02-09T09:56:43.324329765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9d74,Uid:dc6bc49e-686d-4fb0-9969-4dc4513aeb0e,Namespace:kube-system,Attempt:0,}"
Feb 9 09:56:43.345051 kubelet[2429]: I0209 09:56:43.345008 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/806f1e00-073c-4833-9820-c88731c8fc4d-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-m5vhr\" (UID: \"806f1e00-073c-4833-9820-c88731c8fc4d\") " pod="kube-system/cilium-operator-6bc8ccdb58-m5vhr"
Feb 9 09:56:43.345772 kubelet[2429]: I0209 09:56:43.345754 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrt8\" (UniqueName: \"kubernetes.io/projected/806f1e00-073c-4833-9820-c88731c8fc4d-kube-api-access-gbrt8\") pod \"cilium-operator-6bc8ccdb58-m5vhr\" (UID: \"806f1e00-073c-4833-9820-c88731c8fc4d\") " pod="kube-system/cilium-operator-6bc8ccdb58-m5vhr"
Feb 9 09:56:43.373783 env[1354]: time="2024-02-09T09:56:43.373710223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:56:43.373783 env[1354]: time="2024-02-09T09:56:43.373750023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:56:43.373983 env[1354]: time="2024-02-09T09:56:43.373760343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:56:43.374041 env[1354]: time="2024-02-09T09:56:43.374007903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc9101eeef68d4c6f91bb87735fbc6314c39fc1c9aa0a8dd6e81f8442674dce6 pid=2508 runtime=io.containerd.runc.v2
Feb 9 09:56:43.382231 env[1354]: time="2024-02-09T09:56:43.382008106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:56:43.382231 env[1354]: time="2024-02-09T09:56:43.382182426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:56:43.382541 env[1354]: time="2024-02-09T09:56:43.382461427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:56:43.382957 env[1354]: time="2024-02-09T09:56:43.382891907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5 pid=2526 runtime=io.containerd.runc.v2
Feb 9 09:56:43.390295 systemd[1]: Started cri-containerd-fc9101eeef68d4c6f91bb87735fbc6314c39fc1c9aa0a8dd6e81f8442674dce6.scope.
Feb 9 09:56:43.403767 systemd[1]: Started cri-containerd-465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5.scope.
Feb 9 09:56:43.437066 env[1354]: time="2024-02-09T09:56:43.437026887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87wx9,Uid:12f58850-8e6c-4fa8-a6e8-32fe28b7eb79,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc9101eeef68d4c6f91bb87735fbc6314c39fc1c9aa0a8dd6e81f8442674dce6\""
Feb 9 09:56:43.438969 env[1354]: time="2024-02-09T09:56:43.438931528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9d74,Uid:dc6bc49e-686d-4fb0-9969-4dc4513aeb0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\""
Feb 9 09:56:43.443068 env[1354]: time="2024-02-09T09:56:43.443038649Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 09:56:43.443404 env[1354]: time="2024-02-09T09:56:43.443135849Z" level=info msg="CreateContainer within sandbox \"fc9101eeef68d4c6f91bb87735fbc6314c39fc1c9aa0a8dd6e81f8442674dce6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 09:56:43.499646 env[1354]: time="2024-02-09T09:56:43.499600311Z" level=info msg="CreateContainer within sandbox \"fc9101eeef68d4c6f91bb87735fbc6314c39fc1c9aa0a8dd6e81f8442674dce6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0dea68fa3c71cb640072db83c128cf1c35883164836b6ebe2638b6a0e0fd582\""
Feb 9 09:56:43.501579 env[1354]: time="2024-02-09T09:56:43.501539071Z" level=info msg="StartContainer for \"b0dea68fa3c71cb640072db83c128cf1c35883164836b6ebe2638b6a0e0fd582\""
Feb 9 09:56:43.517443 systemd[1]: Started cri-containerd-b0dea68fa3c71cb640072db83c128cf1c35883164836b6ebe2638b6a0e0fd582.scope.
Feb 9 09:56:43.552006 env[1354]: time="2024-02-09T09:56:43.551893170Z" level=info msg="StartContainer for \"b0dea68fa3c71cb640072db83c128cf1c35883164836b6ebe2638b6a0e0fd582\" returns successfully"
Feb 9 09:56:43.593222 env[1354]: time="2024-02-09T09:56:43.593161666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-m5vhr,Uid:806f1e00-073c-4833-9820-c88731c8fc4d,Namespace:kube-system,Attempt:0,}"
Feb 9 09:56:43.631997 env[1354]: time="2024-02-09T09:56:43.631917000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:56:43.631997 env[1354]: time="2024-02-09T09:56:43.631958600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:56:43.632182 env[1354]: time="2024-02-09T09:56:43.631993560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:56:43.632449 env[1354]: time="2024-02-09T09:56:43.632386080Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec pid=2651 runtime=io.containerd.runc.v2
Feb 9 09:56:43.645252 systemd[1]: Started cri-containerd-bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec.scope.
Feb 9 09:56:43.681821 env[1354]: time="2024-02-09T09:56:43.681778859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-m5vhr,Uid:806f1e00-073c-4833-9820-c88731c8fc4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\""
Feb 9 09:56:43.883912 kubelet[2429]: I0209 09:56:43.883881 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-87wx9" podStartSLOduration=1.883836735 podCreationTimestamp="2024-02-09 09:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:43.883630055 +0000 UTC m=+14.202051773" watchObservedRunningTime="2024-02-09 09:56:43.883836735 +0000 UTC m=+14.202258493"
Feb 9 09:56:48.113169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885010409.mount: Deactivated successfully.
Feb 9 09:56:50.293392 env[1354]: time="2024-02-09T09:56:50.293336962Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:50.303698 env[1354]: time="2024-02-09T09:56:50.303643245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:50.309851 env[1354]: time="2024-02-09T09:56:50.309803127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:50.310557 env[1354]: time="2024-02-09T09:56:50.310527608Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 09:56:50.314095 env[1354]: time="2024-02-09T09:56:50.314058609Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 09:56:50.314982 env[1354]: time="2024-02-09T09:56:50.314954609Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:56:50.349536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803473475.mount: Deactivated successfully.
Feb 9 09:56:50.354357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716070657.mount: Deactivated successfully.
Feb 9 09:56:50.365937 env[1354]: time="2024-02-09T09:56:50.365874066Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\""
Feb 9 09:56:50.367362 env[1354]: time="2024-02-09T09:56:50.367320667Z" level=info msg="StartContainer for \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\""
Feb 9 09:56:50.385538 systemd[1]: Started cri-containerd-a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011.scope.
Feb 9 09:56:50.419389 env[1354]: time="2024-02-09T09:56:50.419341244Z" level=info msg="StartContainer for \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\" returns successfully"
Feb 9 09:56:50.423261 systemd[1]: cri-containerd-a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011.scope: Deactivated successfully.
Feb 9 09:56:51.346680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011-rootfs.mount: Deactivated successfully.
Feb 9 09:56:52.118375 env[1354]: time="2024-02-09T09:56:52.118250367Z" level=info msg="shim disconnected" id=a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011
Feb 9 09:56:52.118375 env[1354]: time="2024-02-09T09:56:52.118322887Z" level=warning msg="cleaning up after shim disconnected" id=a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011 namespace=k8s.io
Feb 9 09:56:52.118375 env[1354]: time="2024-02-09T09:56:52.118334247Z" level=info msg="cleaning up dead shim"
Feb 9 09:56:52.126039 env[1354]: time="2024-02-09T09:56:52.125962689Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2827 runtime=io.containerd.runc.v2\n"
Feb 9 09:56:52.911047 env[1354]: time="2024-02-09T09:56:52.910993984Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 09:56:52.952518 env[1354]: time="2024-02-09T09:56:52.952471917Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\""
Feb 9 09:56:52.953511 env[1354]: time="2024-02-09T09:56:52.953486078Z" level=info msg="StartContainer for \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\""
Feb 9 09:56:52.977951 systemd[1]: Started cri-containerd-b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70.scope.
Feb 9 09:56:53.024290 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:56:53.024483 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:56:53.024654 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 09:56:53.026069 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:56:53.034558 systemd[1]: cri-containerd-b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70.scope: Deactivated successfully.
Feb 9 09:56:53.038389 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:56:53.041206 env[1354]: time="2024-02-09T09:56:53.041139866Z" level=info msg="StartContainer for \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\" returns successfully"
Feb 9 09:56:53.133585 env[1354]: time="2024-02-09T09:56:53.133504776Z" level=info msg="shim disconnected" id=b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70
Feb 9 09:56:53.133585 env[1354]: time="2024-02-09T09:56:53.133580416Z" level=warning msg="cleaning up after shim disconnected" id=b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70 namespace=k8s.io
Feb 9 09:56:53.133585 env[1354]: time="2024-02-09T09:56:53.133589456Z" level=info msg="cleaning up dead shim"
Feb 9 09:56:53.141879 env[1354]: time="2024-02-09T09:56:53.141829778Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2892 runtime=io.containerd.runc.v2\n"
Feb 9 09:56:53.763337 env[1354]: time="2024-02-09T09:56:53.763290937Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:53.770043 env[1354]: time="2024-02-09T09:56:53.769994499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:53.774500 env[1354]: time="2024-02-09T09:56:53.774465941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:56:53.775060 env[1354]: time="2024-02-09T09:56:53.775027181Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 9 09:56:53.779657 env[1354]: time="2024-02-09T09:56:53.779625902Z" level=info msg="CreateContainer within sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 09:56:53.807698 env[1354]: time="2024-02-09T09:56:53.807653911Z" level=info msg="CreateContainer within sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\""
Feb 9 09:56:53.809259 env[1354]: time="2024-02-09T09:56:53.809230352Z" level=info msg="StartContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\""
Feb 9 09:56:53.825938 systemd[1]: Started cri-containerd-e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288.scope.
Feb 9 09:56:53.861884 env[1354]: time="2024-02-09T09:56:53.861819569Z" level=info msg="StartContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" returns successfully"
Feb 9 09:56:53.902501 env[1354]: time="2024-02-09T09:56:53.902448222Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:56:53.915827 kubelet[2429]: I0209 09:56:53.915784 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-m5vhr" podStartSLOduration=0.823072824 podCreationTimestamp="2024-02-09 09:56:43 +0000 UTC" firstStartedPulling="2024-02-09 09:56:43.683114539 +0000 UTC m=+14.001536257" lastFinishedPulling="2024-02-09 09:56:53.775786461 +0000 UTC m=+24.094208219" observedRunningTime="2024-02-09 09:56:53.914891226 +0000 UTC m=+24.233312984" watchObservedRunningTime="2024-02-09 09:56:53.915744786 +0000 UTC m=+24.234166544"
Feb 9 09:56:53.936877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70-rootfs.mount: Deactivated successfully.
Feb 9 09:56:53.944949 env[1354]: time="2024-02-09T09:56:53.944889115Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\""
Feb 9 09:56:53.945593 env[1354]: time="2024-02-09T09:56:53.945558115Z" level=info msg="StartContainer for \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\""
Feb 9 09:56:53.970752 systemd[1]: run-containerd-runc-k8s.io-4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4-runc.6mUvUW.mount: Deactivated successfully.
Feb 9 09:56:53.975383 systemd[1]: Started cri-containerd-4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4.scope.
Feb 9 09:56:54.006732 systemd[1]: cri-containerd-4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4.scope: Deactivated successfully.
Feb 9 09:56:54.011503 env[1354]: time="2024-02-09T09:56:54.011458176Z" level=info msg="StartContainer for \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\" returns successfully"
Feb 9 09:56:54.320017 env[1354]: time="2024-02-09T09:56:54.319964074Z" level=info msg="shim disconnected" id=4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4
Feb 9 09:56:54.320017 env[1354]: time="2024-02-09T09:56:54.320012394Z" level=warning msg="cleaning up after shim disconnected" id=4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4 namespace=k8s.io
Feb 9 09:56:54.320017 env[1354]: time="2024-02-09T09:56:54.320021314Z" level=info msg="cleaning up dead shim"
Feb 9 09:56:54.328069 env[1354]: time="2024-02-09T09:56:54.328015196Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2984 runtime=io.containerd.runc.v2\n"
Feb 9 09:56:54.908010 env[1354]: time="2024-02-09T09:56:54.906726659Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:56:54.936089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4-rootfs.mount: Deactivated successfully.
Feb 9 09:56:54.945580 env[1354]: time="2024-02-09T09:56:54.945525351Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\""
Feb 9 09:56:54.946537 env[1354]: time="2024-02-09T09:56:54.946501231Z" level=info msg="StartContainer for \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\""
Feb 9 09:56:54.970838 systemd[1]: Started cri-containerd-6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27.scope.
Feb 9 09:56:54.996677 systemd[1]: cri-containerd-6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27.scope: Deactivated successfully.
Feb 9 09:56:54.998386 env[1354]: time="2024-02-09T09:56:54.998322088Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc6bc49e_686d_4fb0_9969_4dc4513aeb0e.slice/cri-containerd-6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27.scope/memory.events\": no such file or directory"
Feb 9 09:56:55.003168 env[1354]: time="2024-02-09T09:56:55.003093409Z" level=info msg="StartContainer for \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\" returns successfully"
Feb 9 09:56:55.020268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27-rootfs.mount: Deactivated successfully.
Feb 9 09:56:55.037477 env[1354]: time="2024-02-09T09:56:55.037419620Z" level=info msg="shim disconnected" id=6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27
Feb 9 09:56:55.037477 env[1354]: time="2024-02-09T09:56:55.037470300Z" level=warning msg="cleaning up after shim disconnected" id=6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27 namespace=k8s.io
Feb 9 09:56:55.037477 env[1354]: time="2024-02-09T09:56:55.037481740Z" level=info msg="cleaning up dead shim"
Feb 9 09:56:55.045039 env[1354]: time="2024-02-09T09:56:55.044991462Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3041 runtime=io.containerd.runc.v2\n"
Feb 9 09:56:55.914503 env[1354]: time="2024-02-09T09:56:55.914449012Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:56:55.948346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665640530.mount: Deactivated successfully.
Feb 9 09:56:55.962974 env[1354]: time="2024-02-09T09:56:55.962914347Z" level=info msg="CreateContainer within sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\""
Feb 9 09:56:55.965212 env[1354]: time="2024-02-09T09:56:55.963893508Z" level=info msg="StartContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\""
Feb 9 09:56:55.986080 systemd[1]: Started cri-containerd-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0.scope.
Feb 9 09:56:56.027198 env[1354]: time="2024-02-09T09:56:56.027129127Z" level=info msg="StartContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" returns successfully"
Feb 9 09:56:56.142246 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 09:56:56.197008 kubelet[2429]: I0209 09:56:56.196914 2429 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 09:56:56.246557 kubelet[2429]: I0209 09:56:56.246525 2429 topology_manager.go:215] "Topology Admit Handler" podUID="7f89ea3f-5608-425b-a8c9-49a30381c192" podNamespace="kube-system" podName="coredns-5dd5756b68-wwm69"
Feb 9 09:56:56.251216 systemd[1]: Created slice kubepods-burstable-pod7f89ea3f_5608_425b_a8c9_49a30381c192.slice.
Feb 9 09:56:56.258334 kubelet[2429]: I0209 09:56:56.258299 2429 topology_manager.go:215] "Topology Admit Handler" podUID="c432dd86-8d2a-4eee-b1aa-cc68028d6fbe" podNamespace="kube-system" podName="coredns-5dd5756b68-ct6pr"
Feb 9 09:56:56.262643 systemd[1]: Created slice kubepods-burstable-podc432dd86_8d2a_4eee_b1aa_cc68028d6fbe.slice.
Feb 9 09:56:56.333760 kubelet[2429]: I0209 09:56:56.333722 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmlq\" (UniqueName: \"kubernetes.io/projected/7f89ea3f-5608-425b-a8c9-49a30381c192-kube-api-access-bhmlq\") pod \"coredns-5dd5756b68-wwm69\" (UID: \"7f89ea3f-5608-425b-a8c9-49a30381c192\") " pod="kube-system/coredns-5dd5756b68-wwm69"
Feb 9 09:56:56.333897 kubelet[2429]: I0209 09:56:56.333770 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c432dd86-8d2a-4eee-b1aa-cc68028d6fbe-config-volume\") pod \"coredns-5dd5756b68-ct6pr\" (UID: \"c432dd86-8d2a-4eee-b1aa-cc68028d6fbe\") " pod="kube-system/coredns-5dd5756b68-ct6pr"
Feb 9 09:56:56.333897 kubelet[2429]: I0209 09:56:56.333809 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f89ea3f-5608-425b-a8c9-49a30381c192-config-volume\") pod \"coredns-5dd5756b68-wwm69\" (UID: \"7f89ea3f-5608-425b-a8c9-49a30381c192\") " pod="kube-system/coredns-5dd5756b68-wwm69"
Feb 9 09:56:56.333897 kubelet[2429]: I0209 09:56:56.333833 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2tj\" (UniqueName: \"kubernetes.io/projected/c432dd86-8d2a-4eee-b1aa-cc68028d6fbe-kube-api-access-kp2tj\") pod \"coredns-5dd5756b68-ct6pr\" (UID: \"c432dd86-8d2a-4eee-b1aa-cc68028d6fbe\") " pod="kube-system/coredns-5dd5756b68-ct6pr"
Feb 9 09:56:56.483217 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 09:56:56.555734 env[1354]: time="2024-02-09T09:56:56.555676689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wwm69,Uid:7f89ea3f-5608-425b-a8c9-49a30381c192,Namespace:kube-system,Attempt:0,}"
Feb 9 09:56:56.566905 env[1354]: time="2024-02-09T09:56:56.566854933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ct6pr,Uid:c432dd86-8d2a-4eee-b1aa-cc68028d6fbe,Namespace:kube-system,Attempt:0,}"
Feb 9 09:56:56.947426 systemd[1]: run-containerd-runc-k8s.io-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0-runc.UaGtov.mount: Deactivated successfully.
Feb 9 09:56:58.117445 systemd-networkd[1500]: cilium_host: Link UP
Feb 9 09:56:58.124285 systemd-networkd[1500]: cilium_net: Link UP
Feb 9 09:56:58.127992 systemd-networkd[1500]: cilium_net: Gained carrier
Feb 9 09:56:58.134108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 09:56:58.134254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 09:56:58.136831 systemd-networkd[1500]: cilium_host: Gained carrier
Feb 9 09:56:58.313654 systemd-networkd[1500]: cilium_vxlan: Link UP
Feb 9 09:56:58.313660 systemd-networkd[1500]: cilium_vxlan: Gained carrier
Feb 9 09:56:58.475348 systemd-networkd[1500]: cilium_net: Gained IPv6LL
Feb 9 09:56:58.522372 systemd-networkd[1500]: cilium_host: Gained IPv6LL
Feb 9 09:56:58.552210 kernel: NET: Registered PF_ALG protocol family
Feb 9 09:56:59.210915 systemd-networkd[1500]: lxc_health: Link UP
Feb 9 09:56:59.223626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:56:59.223452 systemd-networkd[1500]: lxc_health: Gained carrier
Feb 9 09:56:59.360423 kubelet[2429]: I0209 09:56:59.360392 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s9d74" podStartSLOduration=10.491362613 podCreationTimestamp="2024-02-09 09:56:42 +0000 UTC" firstStartedPulling="2024-02-09 09:56:43.441919609 +0000 UTC m=+13.760341327" lastFinishedPulling="2024-02-09 09:56:50.310909968 +0000 UTC m=+20.629331726" observedRunningTime="2024-02-09 09:56:56.927860403 +0000 UTC m=+27.246282121" watchObservedRunningTime="2024-02-09 09:56:59.360353012 +0000 UTC m=+29.678774770"
Feb 9 09:56:59.639145 systemd-networkd[1500]: lxc05ce92582a31: Link UP
Feb 9 09:56:59.649223 kernel: eth0: renamed from tmpe27b0
Feb 9 09:56:59.663495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc05ce92582a31: link becomes ready
Feb 9 09:56:59.661467 systemd-networkd[1500]: lxc05ce92582a31: Gained carrier
Feb 9 09:56:59.671913 systemd-networkd[1500]: lxc59ecdd54c4b2: Link UP
Feb 9 09:56:59.682282 kernel: eth0: renamed from tmpe3812
Feb 9 09:56:59.697102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc59ecdd54c4b2: link becomes ready
Feb 9 09:56:59.696518 systemd-networkd[1500]: lxc59ecdd54c4b2: Gained carrier
Feb 9 09:56:59.978339 systemd-networkd[1500]: cilium_vxlan: Gained IPv6LL
Feb 9 09:57:01.130359 systemd-networkd[1500]: lxc_health: Gained IPv6LL
Feb 9 09:57:01.323408 systemd-networkd[1500]: lxc05ce92582a31: Gained IPv6LL
Feb 9 09:57:01.323787 systemd-networkd[1500]: lxc59ecdd54c4b2: Gained IPv6LL
Feb 9 09:57:03.267630 env[1354]: time="2024-02-09T09:57:03.267559374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:57:03.268036 env[1354]: time="2024-02-09T09:57:03.268007854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:57:03.268145 env[1354]: time="2024-02-09T09:57:03.268123174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:57:03.268476 env[1354]: time="2024-02-09T09:57:03.268438255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa pid=3593 runtime=io.containerd.runc.v2
Feb 9 09:57:03.279697 env[1354]: time="2024-02-09T09:57:03.279628218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:57:03.279870 env[1354]: time="2024-02-09T09:57:03.279847498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:57:03.279976 env[1354]: time="2024-02-09T09:57:03.279954218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:57:03.281533 env[1354]: time="2024-02-09T09:57:03.281478978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447 pid=3610 runtime=io.containerd.runc.v2
Feb 9 09:57:03.292369 systemd[1]: run-containerd-runc-k8s.io-e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa-runc.feTBcM.mount: Deactivated successfully.
Feb 9 09:57:03.301428 systemd[1]: Started cri-containerd-e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa.scope.
Feb 9 09:57:03.332367 systemd[1]: Started cri-containerd-e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447.scope.
Feb 9 09:57:03.342888 env[1354]: time="2024-02-09T09:57:03.342848155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ct6pr,Uid:c432dd86-8d2a-4eee-b1aa-cc68028d6fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa\""
Feb 9 09:57:03.346463 env[1354]: time="2024-02-09T09:57:03.346425236Z" level=info msg="CreateContainer within sandbox \"e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 09:57:03.383196 env[1354]: time="2024-02-09T09:57:03.383135647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wwm69,Uid:7f89ea3f-5608-425b-a8c9-49a30381c192,Namespace:kube-system,Attempt:0,} returns sandbox id \"e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447\""
Feb 9 09:57:03.383368 env[1354]: time="2024-02-09T09:57:03.383327887Z" level=info msg="CreateContainer within sandbox \"e38128b4bd519a15e9e192f6f8d8f0b6c8aca8a476bff9b8ed79163ee2924caa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7315e70186e639c08d3b4dc7075b264c079e60c3a56a71ffea6c9d7244f3945\""
Feb 9 09:57:03.385835 env[1354]: time="2024-02-09T09:57:03.385798247Z" level=info msg="StartContainer for \"e7315e70186e639c08d3b4dc7075b264c079e60c3a56a71ffea6c9d7244f3945\""
Feb 9 09:57:03.388290 env[1354]: time="2024-02-09T09:57:03.388257728Z" level=info msg="CreateContainer within sandbox \"e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 09:57:03.418966 env[1354]: time="2024-02-09T09:57:03.418914457Z" level=info msg="CreateContainer within sandbox \"e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2cb9a5f45252672f3dbc47be64cdbaa78be62cd3a64908365e2a41fdd89dc8d\""
Feb 9 09:57:03.419872 env[1354]: time="2024-02-09T09:57:03.419845497Z" level=info msg="StartContainer for \"a2cb9a5f45252672f3dbc47be64cdbaa78be62cd3a64908365e2a41fdd89dc8d\""
Feb 9 09:57:03.426345 systemd[1]: Started cri-containerd-e7315e70186e639c08d3b4dc7075b264c079e60c3a56a71ffea6c9d7244f3945.scope.
Feb 9 09:57:03.458432 systemd[1]: Started cri-containerd-a2cb9a5f45252672f3dbc47be64cdbaa78be62cd3a64908365e2a41fdd89dc8d.scope.
Feb 9 09:57:03.497349 env[1354]: time="2024-02-09T09:57:03.497303759Z" level=info msg="StartContainer for \"e7315e70186e639c08d3b4dc7075b264c079e60c3a56a71ffea6c9d7244f3945\" returns successfully"
Feb 9 09:57:03.537393 env[1354]: time="2024-02-09T09:57:03.537282210Z" level=info msg="StartContainer for \"a2cb9a5f45252672f3dbc47be64cdbaa78be62cd3a64908365e2a41fdd89dc8d\" returns successfully"
Feb 9 09:57:03.937124 kubelet[2429]: I0209 09:57:03.937091 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ct6pr" podStartSLOduration=20.937057641 podCreationTimestamp="2024-02-09 09:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:03.935491681 +0000 UTC m=+34.253913399" watchObservedRunningTime="2024-02-09 09:57:03.937057641 +0000 UTC m=+34.255479439"
Feb 9 09:57:03.953383 kubelet[2429]: I0209 09:57:03.953349 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wwm69" podStartSLOduration=20.953314566 podCreationTimestamp="2024-02-09 09:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:03.952014886 +0000 UTC m=+34.270436644" watchObservedRunningTime="2024-02-09 09:57:03.953314566 +0000 UTC m=+34.271736324"
Feb 9 09:57:04.272486 systemd[1]: run-containerd-runc-k8s.io-e27b01175f3369dfa0c19e415f474ff270f6c4fc8881bc0c0daea279c6c86447-runc.sBhjl4.mount: Deactivated successfully.
Feb 9 09:58:21.262739 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.12.6:41580.service.
Feb 9 09:58:21.686727 sshd[3768]: Accepted publickey for core from 10.200.12.6 port 41580 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:58:21.688463 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:58:21.692768 systemd[1]: Started session-8.scope.
Feb 9 09:58:21.693165 systemd-logind[1343]: New session 8 of user core.
Feb 9 09:58:22.208986 sshd[3768]: pam_unix(sshd:session): session closed for user core
Feb 9 09:58:22.211372 systemd[1]: sshd@5-10.200.20.38:22-10.200.12.6:41580.service: Deactivated successfully.
Feb 9 09:58:22.212113 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 09:58:22.212673 systemd-logind[1343]: Session 8 logged out. Waiting for processes to exit.
Feb 9 09:58:22.213645 systemd-logind[1343]: Removed session 8.
Feb 9 09:58:27.280440 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.12.6:38036.service.
Feb 9 09:58:27.701330 sshd[3780]: Accepted publickey for core from 10.200.12.6 port 38036 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:58:27.702923 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:58:27.707138 systemd[1]: Started session-9.scope.
Feb 9 09:58:27.707595 systemd-logind[1343]: New session 9 of user core.
Feb 9 09:58:28.064697 sshd[3780]: pam_unix(sshd:session): session closed for user core
Feb 9 09:58:28.067470 systemd-logind[1343]: Session 9 logged out. Waiting for processes to exit.
Feb 9 09:58:28.067719 systemd[1]: sshd@6-10.200.20.38:22-10.200.12.6:38036.service: Deactivated successfully.
Feb 9 09:58:28.068460 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 09:58:28.069246 systemd-logind[1343]: Removed session 9.
Feb 9 09:58:33.135389 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.12.6:38042.service.
Feb 9 09:58:33.561405 sshd[3794]: Accepted publickey for core from 10.200.12.6 port 38042 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:58:33.563093 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:58:33.567425 systemd[1]: Started session-10.scope.
Feb 9 09:58:33.568722 systemd-logind[1343]: New session 10 of user core.
Feb 9 09:58:33.936744 sshd[3794]: pam_unix(sshd:session): session closed for user core
Feb 9 09:58:33.939450 systemd[1]: sshd@7-10.200.20.38:22-10.200.12.6:38042.service: Deactivated successfully.
Feb 9 09:58:33.940207 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 09:58:33.940757 systemd-logind[1343]: Session 10 logged out. Waiting for processes to exit.
Feb 9 09:58:33.941432 systemd-logind[1343]: Removed session 10.
Feb 9 09:58:39.008501 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.12.6:56866.service.
Feb 9 09:58:39.433642 sshd[3808]: Accepted publickey for core from 10.200.12.6 port 56866 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:58:39.435341 sshd[3808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:58:39.439034 systemd-logind[1343]: New session 11 of user core.
Feb 9 09:58:39.439512 systemd[1]: Started session-11.scope.
Feb 9 09:58:39.797292 sshd[3808]: pam_unix(sshd:session): session closed for user core
Feb 9 09:58:39.799801 systemd-logind[1343]: Session 11 logged out. Waiting for processes to exit.
Feb 9 09:58:39.799976 systemd[1]: sshd@8-10.200.20.38:22-10.200.12.6:56866.service: Deactivated successfully.
Feb 9 09:58:39.800713 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 09:58:39.801379 systemd-logind[1343]: Removed session 11.
Feb 9 09:58:44.869506 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.12.6:56876.service.
Feb 9 09:58:45.290404 sshd[3824]: Accepted publickey for core from 10.200.12.6 port 56876 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:45.291937 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:45.296157 systemd[1]: Started session-12.scope. Feb 9 09:58:45.297238 systemd-logind[1343]: New session 12 of user core. Feb 9 09:58:45.654000 sshd[3824]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:45.656885 systemd[1]: sshd@9-10.200.20.38:22-10.200.12.6:56876.service: Deactivated successfully. Feb 9 09:58:45.657641 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:58:45.658204 systemd-logind[1343]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:58:45.659032 systemd-logind[1343]: Removed session 12. Feb 9 09:58:50.730151 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.12.6:48666.service. Feb 9 09:58:51.185394 sshd[3837]: Accepted publickey for core from 10.200.12.6 port 48666 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:51.187016 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:51.191128 systemd[1]: Started session-13.scope. Feb 9 09:58:51.191438 systemd-logind[1343]: New session 13 of user core. Feb 9 09:58:51.577389 sshd[3837]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:51.580425 systemd-logind[1343]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:58:51.581546 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:58:51.582320 systemd[1]: sshd@10-10.200.20.38:22-10.200.12.6:48666.service: Deactivated successfully. Feb 9 09:58:51.583463 systemd-logind[1343]: Removed session 13. Feb 9 09:58:51.650337 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.12.6:48670.service. 
Feb 9 09:58:52.075028 sshd[3850]: Accepted publickey for core from 10.200.12.6 port 48670 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:52.076635 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:52.080366 systemd-logind[1343]: New session 14 of user core. Feb 9 09:58:52.080799 systemd[1]: Started session-14.scope. Feb 9 09:58:53.043512 sshd[3850]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:53.045868 systemd-logind[1343]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:58:53.046063 systemd[1]: sshd@11-10.200.20.38:22-10.200.12.6:48670.service: Deactivated successfully. Feb 9 09:58:53.046767 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:58:53.047471 systemd-logind[1343]: Removed session 14. Feb 9 09:58:53.118028 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.12.6:48682.service. Feb 9 09:58:53.540213 sshd[3860]: Accepted publickey for core from 10.200.12.6 port 48682 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:53.541536 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:53.545239 systemd-logind[1343]: New session 15 of user core. Feb 9 09:58:53.545817 systemd[1]: Started session-15.scope. Feb 9 09:58:53.911851 sshd[3860]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:53.914392 systemd-logind[1343]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:58:53.914477 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:58:53.915117 systemd[1]: sshd@12-10.200.20.38:22-10.200.12.6:48682.service: Deactivated successfully. Feb 9 09:58:53.916221 systemd-logind[1343]: Removed session 15. Feb 9 09:58:58.990246 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.12.6:38074.service. 
Feb 9 09:58:59.412105 sshd[3872]: Accepted publickey for core from 10.200.12.6 port 38074 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:59.413438 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:59.417759 systemd[1]: Started session-16.scope. Feb 9 09:58:59.418246 systemd-logind[1343]: New session 16 of user core. Feb 9 09:58:59.776932 sshd[3872]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:59.779901 systemd-logind[1343]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:58:59.780954 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:58:59.782114 systemd-logind[1343]: Removed session 16. Feb 9 09:58:59.782646 systemd[1]: sshd@13-10.200.20.38:22-10.200.12.6:38074.service: Deactivated successfully. Feb 9 09:59:04.851174 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.12.6:38076.service. Feb 9 09:59:05.275671 sshd[3884]: Accepted publickey for core from 10.200.12.6 port 38076 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:05.277291 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:05.281582 systemd[1]: Started session-17.scope. Feb 9 09:59:05.281909 systemd-logind[1343]: New session 17 of user core. Feb 9 09:59:05.645523 sshd[3884]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:05.648070 systemd-logind[1343]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:59:05.648812 systemd[1]: sshd@14-10.200.20.38:22-10.200.12.6:38076.service: Deactivated successfully. Feb 9 09:59:05.649591 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:59:05.650301 systemd-logind[1343]: Removed session 17. Feb 9 09:59:05.717984 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.12.6:38084.service. 
Feb 9 09:59:06.139571 sshd[3897]: Accepted publickey for core from 10.200.12.6 port 38084 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:06.141166 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:06.146326 systemd[1]: Started session-18.scope. Feb 9 09:59:06.146949 systemd-logind[1343]: New session 18 of user core. Feb 9 09:59:06.538078 sshd[3897]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:06.541261 systemd[1]: sshd@15-10.200.20.38:22-10.200.12.6:38084.service: Deactivated successfully. Feb 9 09:59:06.541991 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:59:06.543061 systemd-logind[1343]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:59:06.544086 systemd-logind[1343]: Removed session 18. Feb 9 09:59:06.608412 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.12.6:38086.service. Feb 9 09:59:07.030784 sshd[3906]: Accepted publickey for core from 10.200.12.6 port 38086 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:07.032430 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:07.036103 systemd-logind[1343]: New session 19 of user core. Feb 9 09:59:07.036653 systemd[1]: Started session-19.scope. Feb 9 09:59:08.225302 sshd[3906]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:08.227704 systemd-logind[1343]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:59:08.227949 systemd[1]: sshd@16-10.200.20.38:22-10.200.12.6:38086.service: Deactivated successfully. Feb 9 09:59:08.228698 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:59:08.229745 systemd-logind[1343]: Removed session 19. Feb 9 09:59:08.301168 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.12.6:48784.service. 
Feb 9 09:59:08.757237 sshd[3923]: Accepted publickey for core from 10.200.12.6 port 48784 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:08.758923 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:08.763105 systemd[1]: Started session-20.scope. Feb 9 09:59:08.763517 systemd-logind[1343]: New session 20 of user core. Feb 9 09:59:09.309922 sshd[3923]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:09.312540 systemd-logind[1343]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:59:09.312544 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:59:09.313212 systemd[1]: sshd@17-10.200.20.38:22-10.200.12.6:48784.service: Deactivated successfully. Feb 9 09:59:09.314286 systemd-logind[1343]: Removed session 20. Feb 9 09:59:09.380117 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.12.6:48790.service. Feb 9 09:59:09.805939 sshd[3933]: Accepted publickey for core from 10.200.12.6 port 48790 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:09.807231 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:09.811496 systemd[1]: Started session-21.scope. Feb 9 09:59:09.811804 systemd-logind[1343]: New session 21 of user core. Feb 9 09:59:10.176606 sshd[3933]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:10.179248 systemd-logind[1343]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:59:10.179333 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:59:10.179923 systemd[1]: sshd@18-10.200.20.38:22-10.200.12.6:48790.service: Deactivated successfully. Feb 9 09:59:10.180934 systemd-logind[1343]: Removed session 21. Feb 9 09:59:15.248830 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.12.6:48794.service. 
Feb 9 09:59:15.678761 sshd[3948]: Accepted publickey for core from 10.200.12.6 port 48794 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:15.680396 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:15.684710 systemd[1]: Started session-22.scope. Feb 9 09:59:15.685981 systemd-logind[1343]: New session 22 of user core. Feb 9 09:59:16.047111 sshd[3948]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:16.050031 systemd[1]: sshd@19-10.200.20.38:22-10.200.12.6:48794.service: Deactivated successfully. Feb 9 09:59:16.050777 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:59:16.051633 systemd-logind[1343]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:59:16.052421 systemd-logind[1343]: Removed session 22. Feb 9 09:59:21.117583 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.12.6:38988.service. Feb 9 09:59:21.538461 sshd[3960]: Accepted publickey for core from 10.200.12.6 port 38988 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:21.540034 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:21.544207 systemd[1]: Started session-23.scope. Feb 9 09:59:21.545294 systemd-logind[1343]: New session 23 of user core. Feb 9 09:59:21.903323 sshd[3960]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:21.905991 systemd-logind[1343]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:59:21.906749 systemd[1]: sshd@20-10.200.20.38:22-10.200.12.6:38988.service: Deactivated successfully. Feb 9 09:59:21.907582 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:59:21.908359 systemd-logind[1343]: Removed session 23. Feb 9 09:59:26.980905 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.12.6:52188.service. 
Feb 9 09:59:27.436666 sshd[3972]: Accepted publickey for core from 10.200.12.6 port 52188 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:27.438369 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:27.442570 systemd[1]: Started session-24.scope. Feb 9 09:59:27.442864 systemd-logind[1343]: New session 24 of user core. Feb 9 09:59:27.823370 sshd[3972]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:27.826162 systemd[1]: sshd@21-10.200.20.38:22-10.200.12.6:52188.service: Deactivated successfully. Feb 9 09:59:27.826940 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:59:27.828095 systemd-logind[1343]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:59:27.829154 systemd-logind[1343]: Removed session 24. Feb 9 09:59:32.894659 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.12.6:52190.service. Feb 9 09:59:33.315871 sshd[3989]: Accepted publickey for core from 10.200.12.6 port 52190 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:33.317616 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:33.321814 systemd[1]: Started session-25.scope. Feb 9 09:59:33.322394 systemd-logind[1343]: New session 25 of user core. Feb 9 09:59:33.678092 sshd[3989]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:33.680927 systemd[1]: sshd@22-10.200.20.38:22-10.200.12.6:52190.service: Deactivated successfully. Feb 9 09:59:33.681689 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:59:33.682260 systemd-logind[1343]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:59:33.683039 systemd-logind[1343]: Removed session 25. Feb 9 09:59:38.769913 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.12.6:44664.service. 
Feb 9 09:59:39.192287 sshd[4003]: Accepted publickey for core from 10.200.12.6 port 44664 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:39.193613 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:39.202986 systemd[1]: Started session-26.scope. Feb 9 09:59:39.203325 systemd-logind[1343]: New session 26 of user core. Feb 9 09:59:39.560358 sshd[4003]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:39.563239 systemd[1]: sshd@23-10.200.20.38:22-10.200.12.6:44664.service: Deactivated successfully. Feb 9 09:59:39.563402 systemd-logind[1343]: Session 26 logged out. Waiting for processes to exit. Feb 9 09:59:39.563946 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 09:59:39.564609 systemd-logind[1343]: Removed session 26. Feb 9 09:59:44.634535 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.12.6:44670.service. Feb 9 09:59:45.059062 sshd[4020]: Accepted publickey for core from 10.200.12.6 port 44670 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:45.060802 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:45.065238 systemd[1]: Started session-27.scope. Feb 9 09:59:45.066256 systemd-logind[1343]: New session 27 of user core. Feb 9 09:59:45.424819 sshd[4020]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:45.427369 systemd[1]: sshd@24-10.200.20.38:22-10.200.12.6:44670.service: Deactivated successfully. Feb 9 09:59:45.428083 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 09:59:45.428765 systemd-logind[1343]: Session 27 logged out. Waiting for processes to exit. Feb 9 09:59:45.429589 systemd-logind[1343]: Removed session 27. Feb 9 09:59:45.501320 systemd[1]: Started sshd@25-10.200.20.38:22-10.200.12.6:44678.service. 
Feb 9 09:59:45.957554 sshd[4032]: Accepted publickey for core from 10.200.12.6 port 44678 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:45.958847 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:45.963288 systemd-logind[1343]: New session 28 of user core. Feb 9 09:59:45.963335 systemd[1]: Started session-28.scope. Feb 9 09:59:48.551480 systemd[1]: run-containerd-runc-k8s.io-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0-runc.WSnd2q.mount: Deactivated successfully. Feb 9 09:59:48.560087 env[1354]: time="2024-02-09T09:59:48.560035255Z" level=info msg="StopContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" with timeout 30 (s)" Feb 9 09:59:48.560502 env[1354]: time="2024-02-09T09:59:48.560397923Z" level=info msg="Stop container \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" with signal terminated" Feb 9 09:59:48.575880 env[1354]: time="2024-02-09T09:59:48.575815691Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:59:48.577438 systemd[1]: cri-containerd-e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288.scope: Deactivated successfully. 
Feb 9 09:59:48.584103 env[1354]: time="2024-02-09T09:59:48.584068537Z" level=info msg="StopContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" with timeout 2 (s)" Feb 9 09:59:48.584589 env[1354]: time="2024-02-09T09:59:48.584567761Z" level=info msg="Stop container \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" with signal terminated" Feb 9 09:59:48.593062 systemd-networkd[1500]: lxc_health: Link DOWN Feb 9 09:59:48.593068 systemd-networkd[1500]: lxc_health: Lost carrier Feb 9 09:59:48.601480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288-rootfs.mount: Deactivated successfully. Feb 9 09:59:48.616637 systemd[1]: cri-containerd-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0.scope: Deactivated successfully. Feb 9 09:59:48.616942 systemd[1]: cri-containerd-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0.scope: Consumed 6.309s CPU time. Feb 9 09:59:48.635388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0-rootfs.mount: Deactivated successfully. 
Feb 9 09:59:48.652637 env[1354]: time="2024-02-09T09:59:48.652593542Z" level=info msg="shim disconnected" id=e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288 Feb 9 09:59:48.653929 env[1354]: time="2024-02-09T09:59:48.652744297Z" level=warning msg="cleaning up after shim disconnected" id=e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288 namespace=k8s.io Feb 9 09:59:48.653929 env[1354]: time="2024-02-09T09:59:48.652758017Z" level=info msg="cleaning up dead shim" Feb 9 09:59:48.655531 env[1354]: time="2024-02-09T09:59:48.655496926Z" level=info msg="shim disconnected" id=d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0 Feb 9 09:59:48.655699 env[1354]: time="2024-02-09T09:59:48.655679960Z" level=warning msg="cleaning up after shim disconnected" id=d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0 namespace=k8s.io Feb 9 09:59:48.655804 env[1354]: time="2024-02-09T09:59:48.655788956Z" level=info msg="cleaning up dead shim" Feb 9 09:59:48.661303 env[1354]: time="2024-02-09T09:59:48.661268734Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:59:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 09:59:48.667937 env[1354]: time="2024-02-09T09:59:48.667902434Z" level=info msg="StopContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" returns successfully" Feb 9 09:59:48.670702 env[1354]: time="2024-02-09T09:59:48.670405311Z" level=info msg="StopPodSandbox for \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\"" Feb 9 09:59:48.670702 env[1354]: time="2024-02-09T09:59:48.670472909Z" level=info msg="Container to stop \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Feb 9 09:59:48.672159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec-shm.mount: Deactivated successfully. Feb 9 09:59:48.673630 env[1354]: time="2024-02-09T09:59:48.673604045Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4107 runtime=io.containerd.runc.v2\n" Feb 9 09:59:48.677618 systemd[1]: cri-containerd-bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec.scope: Deactivated successfully. Feb 9 09:59:48.679266 env[1354]: time="2024-02-09T09:59:48.679234178Z" level=info msg="StopContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" returns successfully" Feb 9 09:59:48.679983 env[1354]: time="2024-02-09T09:59:48.679943354Z" level=info msg="StopPodSandbox for \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\"" Feb 9 09:59:48.680059 env[1354]: time="2024-02-09T09:59:48.680006552Z" level=info msg="Container to stop \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:48.680059 env[1354]: time="2024-02-09T09:59:48.680023071Z" level=info msg="Container to stop \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:48.680059 env[1354]: time="2024-02-09T09:59:48.680035111Z" level=info msg="Container to stop \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:48.680059 env[1354]: time="2024-02-09T09:59:48.680046351Z" level=info msg="Container to stop \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:48.680059 env[1354]: 
time="2024-02-09T09:59:48.680057590Z" level=info msg="Container to stop \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:48.688503 systemd[1]: cri-containerd-465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5.scope: Deactivated successfully. Feb 9 09:59:48.723018 env[1354]: time="2024-02-09T09:59:48.722952246Z" level=info msg="shim disconnected" id=bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec Feb 9 09:59:48.723018 env[1354]: time="2024-02-09T09:59:48.723010364Z" level=warning msg="cleaning up after shim disconnected" id=bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec namespace=k8s.io Feb 9 09:59:48.723018 env[1354]: time="2024-02-09T09:59:48.723021844Z" level=info msg="cleaning up dead shim" Feb 9 09:59:48.723575 env[1354]: time="2024-02-09T09:59:48.723531907Z" level=info msg="shim disconnected" id=465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5 Feb 9 09:59:48.723575 env[1354]: time="2024-02-09T09:59:48.723570905Z" level=warning msg="cleaning up after shim disconnected" id=465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5 namespace=k8s.io Feb 9 09:59:48.723707 env[1354]: time="2024-02-09T09:59:48.723581465Z" level=info msg="cleaning up dead shim" Feb 9 09:59:48.734752 env[1354]: time="2024-02-09T09:59:48.734701416Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4162 runtime=io.containerd.runc.v2\n" Feb 9 09:59:48.735082 env[1354]: time="2024-02-09T09:59:48.735048124Z" level=info msg="TearDown network for sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" successfully" Feb 9 09:59:48.735121 env[1354]: time="2024-02-09T09:59:48.735079043Z" level=info msg="StopPodSandbox for \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" returns successfully" Feb 9 
09:59:48.739725 env[1354]: time="2024-02-09T09:59:48.739558735Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n" Feb 9 09:59:48.739842 env[1354]: time="2024-02-09T09:59:48.739827646Z" level=info msg="TearDown network for sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" successfully" Feb 9 09:59:48.739883 env[1354]: time="2024-02-09T09:59:48.739846165Z" level=info msg="StopPodSandbox for \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" returns successfully" Feb 9 09:59:48.857818 kubelet[2429]: I0209 09:59:48.857781 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-net\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857826 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-kernel\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857848 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-run\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857866 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hostproc\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: 
\"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857890 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hubble-tls\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857913 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/806f1e00-073c-4833-9820-c88731c8fc4d-cilium-config-path\") pod \"806f1e00-073c-4833-9820-c88731c8fc4d\" (UID: \"806f1e00-073c-4833-9820-c88731c8fc4d\") " Feb 9 09:59:48.858147 kubelet[2429]: I0209 09:59:48.857935 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbrt8\" (UniqueName: \"kubernetes.io/projected/806f1e00-073c-4833-9820-c88731c8fc4d-kube-api-access-gbrt8\") pod \"806f1e00-073c-4833-9820-c88731c8fc4d\" (UID: \"806f1e00-073c-4833-9820-c88731c8fc4d\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.857955 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-cgroup\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.857971 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-xtables-lock\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.857988 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-bpf-maps\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.858005 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-lib-modules\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.858024 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-clustermesh-secrets\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858330 kubelet[2429]: I0209 09:59:48.858046 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-config-path\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858469 kubelet[2429]: I0209 09:59:48.858062 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-etc-cni-netd\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858469 kubelet[2429]: I0209 09:59:48.858078 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cni-path\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858469 kubelet[2429]: I0209 09:59:48.858123 2429 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz476\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-kube-api-access-tz476\") pod \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\" (UID: \"dc6bc49e-686d-4fb0-9969-4dc4513aeb0e\") " Feb 9 09:59:48.858535 kubelet[2429]: I0209 09:59:48.858493 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860058 kubelet[2429]: I0209 09:59:48.858556 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860058 kubelet[2429]: I0209 09:59:48.858584 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860058 kubelet[2429]: I0209 09:59:48.858602 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860058 kubelet[2429]: I0209 09:59:48.858633 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860058 kubelet[2429]: I0209 09:59:48.858695 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860285 kubelet[2429]: I0209 09:59:48.858714 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.860285 kubelet[2429]: I0209 09:59:48.858730 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.862282 kubelet[2429]: I0209 09:59:48.862147 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.862282 kubelet[2429]: I0209 09:59:48.862211 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:48.866025 kubelet[2429]: I0209 09:59:48.865996 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-kube-api-access-tz476" (OuterVolumeSpecName: "kube-api-access-tz476") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "kube-api-access-tz476". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:48.866806 kubelet[2429]: I0209 09:59:48.866774 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:48.866926 kubelet[2429]: I0209 09:59:48.866903 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:48.867012 kubelet[2429]: I0209 09:59:48.866989 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" (UID: "dc6bc49e-686d-4fb0-9969-4dc4513aeb0e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:48.867123 kubelet[2429]: I0209 09:59:48.867087 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806f1e00-073c-4833-9820-c88731c8fc4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "806f1e00-073c-4833-9820-c88731c8fc4d" (UID: "806f1e00-073c-4833-9820-c88731c8fc4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:48.868899 kubelet[2429]: I0209 09:59:48.868875 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806f1e00-073c-4833-9820-c88731c8fc4d-kube-api-access-gbrt8" (OuterVolumeSpecName: "kube-api-access-gbrt8") pod "806f1e00-073c-4833-9820-c88731c8fc4d" (UID: "806f1e00-073c-4833-9820-c88731c8fc4d"). InnerVolumeSpecName "kube-api-access-gbrt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:48.959313 kubelet[2429]: I0209 09:59:48.959270 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-net\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959492 kubelet[2429]: I0209 09:59:48.959482 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959571 kubelet[2429]: I0209 09:59:48.959562 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-run\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959636 kubelet[2429]: I0209 09:59:48.959626 2429 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hostproc\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959693 kubelet[2429]: I0209 09:59:48.959684 2429 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-hubble-tls\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959763 kubelet[2429]: I0209 09:59:48.959754 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/806f1e00-073c-4833-9820-c88731c8fc4d-cilium-config-path\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959827 kubelet[2429]: I0209 09:59:48.959817 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gbrt8\" (UniqueName: 
\"kubernetes.io/projected/806f1e00-073c-4833-9820-c88731c8fc4d-kube-api-access-gbrt8\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959886 kubelet[2429]: I0209 09:59:48.959876 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-cgroup\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.959973 kubelet[2429]: I0209 09:59:48.959964 2429 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-xtables-lock\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960030 kubelet[2429]: I0209 09:59:48.960022 2429 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-bpf-maps\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960090 kubelet[2429]: I0209 09:59:48.960081 2429 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-lib-modules\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960146 kubelet[2429]: I0209 09:59:48.960137 2429 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-clustermesh-secrets\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960228 kubelet[2429]: I0209 09:59:48.960216 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cilium-config-path\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960296 kubelet[2429]: I0209 09:59:48.960287 2429 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-etc-cni-netd\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960355 kubelet[2429]: I0209 09:59:48.960347 2429 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-cni-path\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:48.960416 kubelet[2429]: I0209 09:59:48.960406 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tz476\" (UniqueName: \"kubernetes.io/projected/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e-kube-api-access-tz476\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:49.224094 kubelet[2429]: I0209 09:59:49.222722 2429 scope.go:117] "RemoveContainer" containerID="e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288" Feb 9 09:59:49.225476 env[1354]: time="2024-02-09T09:59:49.225433347Z" level=info msg="RemoveContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\"" Feb 9 09:59:49.230244 systemd[1]: Removed slice kubepods-besteffort-pod806f1e00_073c_4833_9820_c88731c8fc4d.slice. Feb 9 09:59:49.237726 systemd[1]: Removed slice kubepods-burstable-poddc6bc49e_686d_4fb0_9969_4dc4513aeb0e.slice. Feb 9 09:59:49.237807 systemd[1]: kubepods-burstable-poddc6bc49e_686d_4fb0_9969_4dc4513aeb0e.slice: Consumed 6.398s CPU time. 
Feb 9 09:59:49.244241 env[1354]: time="2024-02-09T09:59:49.244023975Z" level=info msg="RemoveContainer for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" returns successfully" Feb 9 09:59:49.244544 kubelet[2429]: I0209 09:59:49.244516 2429 scope.go:117] "RemoveContainer" containerID="e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288" Feb 9 09:59:49.244946 env[1354]: time="2024-02-09T09:59:49.244879667Z" level=error msg="ContainerStatus for \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\": not found" Feb 9 09:59:49.246737 kubelet[2429]: E0209 09:59:49.246576 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\": not found" containerID="e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288" Feb 9 09:59:49.246737 kubelet[2429]: I0209 09:59:49.246653 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288"} err="failed to get container status \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\": rpc error: code = NotFound desc = an error occurred when try to find container \"e205e18a837ad8dc2a25b1277ef560a41fc4a82eb070dc4ec7b36d99fd879288\": not found" Feb 9 09:59:49.246737 kubelet[2429]: I0209 09:59:49.246666 2429 scope.go:117] "RemoveContainer" containerID="d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0" Feb 9 09:59:49.247933 env[1354]: time="2024-02-09T09:59:49.247907687Z" level=info msg="RemoveContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\"" Feb 9 09:59:49.256704 env[1354]: 
time="2024-02-09T09:59:49.256667519Z" level=info msg="RemoveContainer for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" returns successfully" Feb 9 09:59:49.257077 kubelet[2429]: I0209 09:59:49.256978 2429 scope.go:117] "RemoveContainer" containerID="6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27" Feb 9 09:59:49.258138 env[1354]: time="2024-02-09T09:59:49.258113111Z" level=info msg="RemoveContainer for \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\"" Feb 9 09:59:49.268043 env[1354]: time="2024-02-09T09:59:49.268007586Z" level=info msg="RemoveContainer for \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\" returns successfully" Feb 9 09:59:49.268384 kubelet[2429]: I0209 09:59:49.268261 2429 scope.go:117] "RemoveContainer" containerID="4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4" Feb 9 09:59:49.269103 env[1354]: time="2024-02-09T09:59:49.269070831Z" level=info msg="RemoveContainer for \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\"" Feb 9 09:59:49.288668 env[1354]: time="2024-02-09T09:59:49.288619827Z" level=info msg="RemoveContainer for \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\" returns successfully" Feb 9 09:59:49.289009 kubelet[2429]: I0209 09:59:49.288982 2429 scope.go:117] "RemoveContainer" containerID="b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70" Feb 9 09:59:49.290217 env[1354]: time="2024-02-09T09:59:49.290160857Z" level=info msg="RemoveContainer for \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\"" Feb 9 09:59:49.298478 env[1354]: time="2024-02-09T09:59:49.298431784Z" level=info msg="RemoveContainer for \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\" returns successfully" Feb 9 09:59:49.298754 kubelet[2429]: I0209 09:59:49.298724 2429 scope.go:117] "RemoveContainer" containerID="a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011" Feb 9 
09:59:49.299838 env[1354]: time="2024-02-09T09:59:49.299811499Z" level=info msg="RemoveContainer for \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\"" Feb 9 09:59:49.309265 env[1354]: time="2024-02-09T09:59:49.309231749Z" level=info msg="RemoveContainer for \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\" returns successfully" Feb 9 09:59:49.309573 kubelet[2429]: I0209 09:59:49.309551 2429 scope.go:117] "RemoveContainer" containerID="d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0" Feb 9 09:59:49.309838 env[1354]: time="2024-02-09T09:59:49.309777011Z" level=error msg="ContainerStatus for \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\": not found" Feb 9 09:59:49.309989 kubelet[2429]: E0209 09:59:49.309967 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\": not found" containerID="d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0" Feb 9 09:59:49.310030 kubelet[2429]: I0209 09:59:49.310006 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0"} err="failed to get container status \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54b9565d707a9c5c3802d7042e8984cb6575752f2cc5b8a0f105835b5ad47d0\": not found" Feb 9 09:59:49.310030 kubelet[2429]: I0209 09:59:49.310016 2429 scope.go:117] "RemoveContainer" containerID="6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27" Feb 9 09:59:49.310241 env[1354]: 
time="2024-02-09T09:59:49.310172598Z" level=error msg="ContainerStatus for \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\": not found" Feb 9 09:59:49.310356 kubelet[2429]: E0209 09:59:49.310336 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\": not found" containerID="6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27" Feb 9 09:59:49.310419 kubelet[2429]: I0209 09:59:49.310368 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27"} err="failed to get container status \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fe77d70f6b70ab8508dac505beb1f646dfc07c1cd82d4b51e488d75abffef27\": not found" Feb 9 09:59:49.310419 kubelet[2429]: I0209 09:59:49.310380 2429 scope.go:117] "RemoveContainer" containerID="4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4" Feb 9 09:59:49.310579 env[1354]: time="2024-02-09T09:59:49.310531266Z" level=error msg="ContainerStatus for \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\": not found" Feb 9 09:59:49.310699 kubelet[2429]: E0209 09:59:49.310680 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\": not found" containerID="4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4" Feb 9 09:59:49.310752 kubelet[2429]: I0209 09:59:49.310709 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4"} err="failed to get container status \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4079f669b585cd8877670053a2845b1e9d12b066106d884751a8b0c77d2f50e4\": not found" Feb 9 09:59:49.310752 kubelet[2429]: I0209 09:59:49.310719 2429 scope.go:117] "RemoveContainer" containerID="b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70" Feb 9 09:59:49.310901 env[1354]: time="2024-02-09T09:59:49.310854695Z" level=error msg="ContainerStatus for \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\": not found" Feb 9 09:59:49.311011 kubelet[2429]: E0209 09:59:49.310993 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\": not found" containerID="b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70" Feb 9 09:59:49.311069 kubelet[2429]: I0209 09:59:49.311021 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70"} err="failed to get container status \"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"b81d7ba9350c66fbf1d6389199f7624b141339bffe0b02e09de6bcea5fa88e70\": not found" Feb 9 09:59:49.311069 kubelet[2429]: I0209 09:59:49.311033 2429 scope.go:117] "RemoveContainer" containerID="a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011" Feb 9 09:59:49.311310 env[1354]: time="2024-02-09T09:59:49.311172925Z" level=error msg="ContainerStatus for \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\": not found" Feb 9 09:59:49.311426 kubelet[2429]: E0209 09:59:49.311408 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\": not found" containerID="a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011" Feb 9 09:59:49.311482 kubelet[2429]: I0209 09:59:49.311437 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011"} err="failed to get container status \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1598f9cb12b4f386f54e7c61bc1a071e75057ec55a415bcb238fbc814bfe011\": not found" Feb 9 09:59:49.546489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec-rootfs.mount: Deactivated successfully. Feb 9 09:59:49.546583 systemd[1]: var-lib-kubelet-pods-806f1e00\x2d073c\x2d4833\x2d9820\x2dc88731c8fc4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbrt8.mount: Deactivated successfully. 
Feb 9 09:59:49.546644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5-rootfs.mount: Deactivated successfully. Feb 9 09:59:49.546692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5-shm.mount: Deactivated successfully. Feb 9 09:59:49.546750 systemd[1]: var-lib-kubelet-pods-dc6bc49e\x2d686d\x2d4fb0\x2d9969\x2d4dc4513aeb0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtz476.mount: Deactivated successfully. Feb 9 09:59:49.546809 systemd[1]: var-lib-kubelet-pods-dc6bc49e\x2d686d\x2d4fb0\x2d9969\x2d4dc4513aeb0e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:49.546857 systemd[1]: var-lib-kubelet-pods-dc6bc49e\x2d686d\x2d4fb0\x2d9969\x2d4dc4513aeb0e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:49.782133 kubelet[2429]: I0209 09:59:49.782099 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="806f1e00-073c-4833-9820-c88731c8fc4d" path="/var/lib/kubelet/pods/806f1e00-073c-4833-9820-c88731c8fc4d/volumes" Feb 9 09:59:49.782552 kubelet[2429]: I0209 09:59:49.782531 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" path="/var/lib/kubelet/pods/dc6bc49e-686d-4fb0-9969-4dc4513aeb0e/volumes" Feb 9 09:59:49.945231 kubelet[2429]: E0209 09:59:49.945201 2429 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:59:50.537809 sshd[4032]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:50.540426 systemd[1]: sshd@25-10.200.20.38:22-10.200.12.6:44678.service: Deactivated successfully. Feb 9 09:59:50.541143 systemd[1]: session-28.scope: Deactivated successfully. 
Feb 9 09:59:50.541348 systemd[1]: session-28.scope: Consumed 1.649s CPU time. Feb 9 09:59:50.542248 systemd-logind[1343]: Session 28 logged out. Waiting for processes to exit. Feb 9 09:59:50.542967 systemd-logind[1343]: Removed session 28. Feb 9 09:59:50.610350 systemd[1]: Started sshd@26-10.200.20.38:22-10.200.12.6:43362.service. Feb 9 09:59:51.037243 sshd[4194]: Accepted publickey for core from 10.200.12.6 port 43362 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:51.038946 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:51.042986 systemd-logind[1343]: New session 29 of user core. Feb 9 09:59:51.043476 systemd[1]: Started session-29.scope. Feb 9 09:59:52.441697 kubelet[2429]: I0209 09:59:52.441659 2429 topology_manager.go:215] "Topology Admit Handler" podUID="4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" podNamespace="kube-system" podName="cilium-89gwd" Feb 9 09:59:52.442243 kubelet[2429]: E0209 09:59:52.442228 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="806f1e00-073c-4833-9820-c88731c8fc4d" containerName="cilium-operator" Feb 9 09:59:52.442345 kubelet[2429]: E0209 09:59:52.442336 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="clean-cilium-state" Feb 9 09:59:52.442410 kubelet[2429]: E0209 09:59:52.442390 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="cilium-agent" Feb 9 09:59:52.442466 kubelet[2429]: E0209 09:59:52.442458 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="mount-cgroup" Feb 9 09:59:52.442532 kubelet[2429]: E0209 09:59:52.442524 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="apply-sysctl-overwrites" Feb 9 09:59:52.442632 kubelet[2429]: E0209 
09:59:52.442623 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="mount-bpf-fs" Feb 9 09:59:52.442771 kubelet[2429]: I0209 09:59:52.442753 2429 memory_manager.go:346] "RemoveStaleState removing state" podUID="dc6bc49e-686d-4fb0-9969-4dc4513aeb0e" containerName="cilium-agent" Feb 9 09:59:52.442845 kubelet[2429]: I0209 09:59:52.442836 2429 memory_manager.go:346] "RemoveStaleState removing state" podUID="806f1e00-073c-4833-9820-c88731c8fc4d" containerName="cilium-operator" Feb 9 09:59:52.445911 sshd[4194]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:52.448914 systemd[1]: sshd@26-10.200.20.38:22-10.200.12.6:43362.service: Deactivated successfully. Feb 9 09:59:52.449628 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 09:59:52.449796 systemd[1]: session-29.scope: Consumed 1.020s CPU time. Feb 9 09:59:52.451076 systemd-logind[1343]: Session 29 logged out. Waiting for processes to exit. Feb 9 09:59:52.452706 systemd[1]: Created slice kubepods-burstable-pod4ef3cc5b_0e67_4ba7_87d5_6439baedb33e.slice. Feb 9 09:59:52.453072 systemd-logind[1343]: Removed session 29. 
Feb 9 09:59:52.459083 kubelet[2429]: W0209 09:59:52.459052 2429 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b353ffea6c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b353ffea6c' and this object Feb 9 09:59:52.459255 kubelet[2429]: E0209 09:59:52.459243 2429 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b353ffea6c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b353ffea6c' and this object Feb 9 09:59:52.459373 kubelet[2429]: W0209 09:59:52.459361 2429 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-b353ffea6c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b353ffea6c' and this object Feb 9 09:59:52.459657 kubelet[2429]: E0209 09:59:52.459645 2429 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-b353ffea6c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b353ffea6c' and this object Feb 9 09:59:52.479720 kubelet[2429]: I0209 09:59:52.479685 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cni-path\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" 
Feb 9 09:59:52.479928 kubelet[2429]: I0209 09:59:52.479915 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhpjm\" (UniqueName: \"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-kube-api-access-zhpjm\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480046 kubelet[2429]: I0209 09:59:52.480035 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-cgroup\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480150 kubelet[2429]: I0209 09:59:52.480139 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-net\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480313 kubelet[2429]: I0209 09:59:52.480300 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hubble-tls\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480416 kubelet[2429]: I0209 09:59:52.480406 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-run\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480512 kubelet[2429]: I0209 09:59:52.480501 2429 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-bpf-maps\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480617 kubelet[2429]: I0209 09:59:52.480607 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480710 kubelet[2429]: I0209 09:59:52.480700 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-kernel\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480806 kubelet[2429]: I0209 09:59:52.480796 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-lib-modules\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480948 kubelet[2429]: I0209 09:59:52.480922 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hostproc\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480987 kubelet[2429]: I0209 09:59:52.480962 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-xtables-lock\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.480987 kubelet[2429]: I0209 09:59:52.480982 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-config-path\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.481052 kubelet[2429]: I0209 09:59:52.481002 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-ipsec-secrets\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.481052 kubelet[2429]: I0209 09:59:52.481023 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-etc-cni-netd\") pod \"cilium-89gwd\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " pod="kube-system/cilium-89gwd" Feb 9 09:59:52.517908 systemd[1]: Started sshd@27-10.200.20.38:22-10.200.12.6:43374.service. Feb 9 09:59:52.948412 sshd[4205]: Accepted publickey for core from 10.200.12.6 port 43374 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:52.949721 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:52.953241 systemd-logind[1343]: New session 30 of user core. Feb 9 09:59:52.954247 systemd[1]: Started session-30.scope. 
Feb 9 09:59:53.282258 kubelet[2429]: E0209 09:59:53.282125 2429 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets clustermesh-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-89gwd" podUID="4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" Feb 9 09:59:53.331717 sshd[4205]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:53.334996 systemd-logind[1343]: Session 30 logged out. Waiting for processes to exit. Feb 9 09:59:53.335158 systemd[1]: sshd@27-10.200.20.38:22-10.200.12.6:43374.service: Deactivated successfully. Feb 9 09:59:53.335858 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 09:59:53.336792 systemd-logind[1343]: Removed session 30. Feb 9 09:59:53.403267 systemd[1]: Started sshd@28-10.200.20.38:22-10.200.12.6:43386.service. Feb 9 09:59:53.582932 kubelet[2429]: E0209 09:59:53.582854 2429 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 9 09:59:53.583330 kubelet[2429]: E0209 09:59:53.583310 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets podName:4ef3cc5b-0e67-4ba7-87d5-6439baedb33e nodeName:}" failed. No retries permitted until 2024-02-09 09:59:54.083288118 +0000 UTC m=+204.401709836 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets") pod "cilium-89gwd" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e") : failed to sync secret cache: timed out waiting for the condition Feb 9 09:59:53.830270 sshd[4220]: Accepted publickey for core from 10.200.12.6 port 43386 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:53.831293 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:53.835578 systemd[1]: Started session-31.scope. Feb 9 09:59:53.835873 systemd-logind[1343]: New session 31 of user core. Feb 9 09:59:54.293969 kubelet[2429]: I0209 09:59:54.293933 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhpjm\" (UniqueName: \"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-kube-api-access-zhpjm\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.294610 kubelet[2429]: I0209 09:59:54.294596 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-net\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.294751 kubelet[2429]: I0209 09:59:54.294741 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hubble-tls\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.294851 kubelet[2429]: I0209 09:59:54.294840 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-run\") pod 
\"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.294941 kubelet[2429]: I0209 09:59:54.294931 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-bpf-maps\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295024 kubelet[2429]: I0209 09:59:54.295015 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295121 kubelet[2429]: I0209 09:59:54.295112 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-ipsec-secrets\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295222 kubelet[2429]: I0209 09:59:54.295212 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hostproc\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295315 kubelet[2429]: I0209 09:59:54.295305 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-xtables-lock\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295397 kubelet[2429]: I0209 09:59:54.295388 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-cgroup\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295475 kubelet[2429]: I0209 09:59:54.295467 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-kernel\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295560 kubelet[2429]: I0209 09:59:54.295551 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-lib-modules\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295637 kubelet[2429]: I0209 09:59:54.295629 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-etc-cni-netd\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295729 kubelet[2429]: I0209 09:59:54.295720 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-config-path\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295827 kubelet[2429]: I0209 09:59:54.295818 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cni-path\") pod \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\" (UID: \"4ef3cc5b-0e67-4ba7-87d5-6439baedb33e\") " Feb 9 09:59:54.295943 kubelet[2429]: I0209 09:59:54.295929 2429 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.296042 kubelet[2429]: I0209 09:59:54.296029 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.296133 kubelet[2429]: I0209 09:59:54.296121 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.296606 kubelet[2429]: I0209 09:59:54.296577 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.298103 systemd[1]: var-lib-kubelet-pods-4ef3cc5b\x2d0e67\x2d4ba7\x2d87d5\x2d6439baedb33e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhpjm.mount: Deactivated successfully. 
Feb 9 09:59:54.298866 kubelet[2429]: I0209 09:59:54.298844 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.298981 kubelet[2429]: I0209 09:59:54.298968 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.299059 kubelet[2429]: I0209 09:59:54.298971 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.299152 kubelet[2429]: I0209 09:59:54.299135 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.299285 kubelet[2429]: I0209 09:59:54.299251 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.299388 kubelet[2429]: I0209 09:59:54.299373 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:54.301536 kubelet[2429]: I0209 09:59:54.301504 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:54.301730 kubelet[2429]: I0209 09:59:54.301714 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-kube-api-access-zhpjm" (OuterVolumeSpecName: "kube-api-access-zhpjm") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "kube-api-access-zhpjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:54.303443 systemd[1]: var-lib-kubelet-pods-4ef3cc5b\x2d0e67\x2d4ba7\x2d87d5\x2d6439baedb33e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:54.304326 kubelet[2429]: I0209 09:59:54.304302 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:54.306926 systemd[1]: var-lib-kubelet-pods-4ef3cc5b\x2d0e67\x2d4ba7\x2d87d5\x2d6439baedb33e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:54.307629 kubelet[2429]: I0209 09:59:54.307608 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:54.308981 systemd[1]: var-lib-kubelet-pods-4ef3cc5b\x2d0e67\x2d4ba7\x2d87d5\x2d6439baedb33e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:54.309738 kubelet[2429]: I0209 09:59:54.309712 2429 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" (UID: "4ef3cc5b-0e67-4ba7-87d5-6439baedb33e"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:54.396679 kubelet[2429]: I0209 09:59:54.396639 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396679 kubelet[2429]: I0209 09:59:54.396673 2429 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-lib-modules\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396679 kubelet[2429]: I0209 09:59:54.396685 2429 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-etc-cni-netd\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396696 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-config-path\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396707 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-cgroup\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396717 2429 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cni-path\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396726 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zhpjm\" (UniqueName: 
\"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-kube-api-access-zhpjm\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396736 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-host-proc-sys-net\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396746 2429 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hubble-tls\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396757 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-run\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.396891 kubelet[2429]: I0209 09:59:54.396767 2429 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-bpf-maps\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.397077 kubelet[2429]: I0209 09:59:54.396776 2429 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-clustermesh-secrets\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.397077 kubelet[2429]: I0209 09:59:54.396785 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.397077 kubelet[2429]: I0209 09:59:54.396794 2429 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-hostproc\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.397077 kubelet[2429]: I0209 09:59:54.396806 2429 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e-xtables-lock\") on node \"ci-3510.3.2-a-b353ffea6c\" DevicePath \"\"" Feb 9 09:59:54.501924 kubelet[2429]: I0209 09:59:54.501903 2429 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-b353ffea6c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T09:59:54Z","lastTransitionTime":"2024-02-09T09:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 09:59:54.945947 kubelet[2429]: E0209 09:59:54.945906 2429 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:59:55.244499 systemd[1]: Removed slice kubepods-burstable-pod4ef3cc5b_0e67_4ba7_87d5_6439baedb33e.slice. Feb 9 09:59:55.286629 kubelet[2429]: I0209 09:59:55.286566 2429 topology_manager.go:215] "Topology Admit Handler" podUID="2f131ccb-4d32-4a11-b99e-d2621200794d" podNamespace="kube-system" podName="cilium-4vpkr" Feb 9 09:59:55.291766 systemd[1]: Created slice kubepods-burstable-pod2f131ccb_4d32_4a11_b99e_d2621200794d.slice. 
Feb 9 09:59:55.402394 kubelet[2429]: I0209 09:59:55.402364 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-xtables-lock\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.402601 kubelet[2429]: I0209 09:59:55.402589 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7r6w\" (UniqueName: \"kubernetes.io/projected/2f131ccb-4d32-4a11-b99e-d2621200794d-kube-api-access-x7r6w\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.402689 kubelet[2429]: I0209 09:59:55.402679 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-lib-modules\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.402776 kubelet[2429]: I0209 09:59:55.402766 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-cni-path\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.402866 kubelet[2429]: I0209 09:59:55.402855 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-bpf-maps\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.402967 kubelet[2429]: I0209 09:59:55.402948 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f131ccb-4d32-4a11-b99e-d2621200794d-cilium-ipsec-secrets\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403025 kubelet[2429]: I0209 09:59:55.402988 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-etc-cni-netd\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403025 kubelet[2429]: I0209 09:59:55.403011 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-cilium-cgroup\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403083 kubelet[2429]: I0209 09:59:55.403033 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-host-proc-sys-kernel\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403083 kubelet[2429]: I0209 09:59:55.403056 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f131ccb-4d32-4a11-b99e-d2621200794d-cilium-config-path\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403083 kubelet[2429]: I0209 09:59:55.403074 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-cilium-run\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403150 kubelet[2429]: I0209 09:59:55.403092 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-hostproc\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403150 kubelet[2429]: I0209 09:59:55.403112 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f131ccb-4d32-4a11-b99e-d2621200794d-clustermesh-secrets\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403150 kubelet[2429]: I0209 09:59:55.403130 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f131ccb-4d32-4a11-b99e-d2621200794d-host-proc-sys-net\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.403150 kubelet[2429]: I0209 09:59:55.403147 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f131ccb-4d32-4a11-b99e-d2621200794d-hubble-tls\") pod \"cilium-4vpkr\" (UID: \"2f131ccb-4d32-4a11-b99e-d2621200794d\") " pod="kube-system/cilium-4vpkr" Feb 9 09:59:55.595977 env[1354]: time="2024-02-09T09:59:55.595860764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vpkr,Uid:2f131ccb-4d32-4a11-b99e-d2621200794d,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:55.628691 env[1354]: time="2024-02-09T09:59:55.628618340Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:55.628839 env[1354]: time="2024-02-09T09:59:55.628694778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:55.628839 env[1354]: time="2024-02-09T09:59:55.628720337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:55.628984 env[1354]: time="2024-02-09T09:59:55.628949530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8 pid=4245 runtime=io.containerd.runc.v2 Feb 9 09:59:55.639547 systemd[1]: Started cri-containerd-fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8.scope. Feb 9 09:59:55.663552 env[1354]: time="2024-02-09T09:59:55.663506730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vpkr,Uid:2f131ccb-4d32-4a11-b99e-d2621200794d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\"" Feb 9 09:59:55.667630 env[1354]: time="2024-02-09T09:59:55.667586882Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:59:55.699790 env[1354]: time="2024-02-09T09:59:55.699744677Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2\"" Feb 9 09:59:55.700684 env[1354]: time="2024-02-09T09:59:55.700657729Z" level=info msg="StartContainer for \"d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2\"" Feb 9 
09:59:55.716967 systemd[1]: Started cri-containerd-d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2.scope. Feb 9 09:59:55.751494 env[1354]: time="2024-02-09T09:59:55.751448581Z" level=info msg="StartContainer for \"d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2\" returns successfully" Feb 9 09:59:55.755730 systemd[1]: cri-containerd-d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2.scope: Deactivated successfully. Feb 9 09:59:55.783101 kubelet[2429]: I0209 09:59:55.782910 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4ef3cc5b-0e67-4ba7-87d5-6439baedb33e" path="/var/lib/kubelet/pods/4ef3cc5b-0e67-4ba7-87d5-6439baedb33e/volumes" Feb 9 09:59:55.862790 env[1354]: time="2024-02-09T09:59:55.862745342Z" level=info msg="shim disconnected" id=d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2 Feb 9 09:59:55.863057 env[1354]: time="2024-02-09T09:59:55.863037093Z" level=warning msg="cleaning up after shim disconnected" id=d30682989834908e2f4231623aaea0a02111f1a93c895d719bfa8cfbcdc8a8a2 namespace=k8s.io Feb 9 09:59:55.863156 env[1354]: time="2024-02-09T09:59:55.863141890Z" level=info msg="cleaning up dead shim" Feb 9 09:59:55.870773 env[1354]: time="2024-02-09T09:59:55.870729653Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4331 runtime=io.containerd.runc.v2\n" Feb 9 09:59:56.246554 env[1354]: time="2024-02-09T09:59:56.246453133Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:59:56.275336 env[1354]: time="2024-02-09T09:59:56.275282919Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451\"" Feb 9 09:59:56.276564 env[1354]: time="2024-02-09T09:59:56.275753305Z" level=info msg="StartContainer for \"63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451\"" Feb 9 09:59:56.289781 systemd[1]: Started cri-containerd-63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451.scope. Feb 9 09:59:56.324270 systemd[1]: cri-containerd-63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451.scope: Deactivated successfully. Feb 9 09:59:56.326641 env[1354]: time="2024-02-09T09:59:56.326606529Z" level=info msg="StartContainer for \"63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451\" returns successfully" Feb 9 09:59:56.354706 env[1354]: time="2024-02-09T09:59:56.354656419Z" level=info msg="shim disconnected" id=63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451 Feb 9 09:59:56.354706 env[1354]: time="2024-02-09T09:59:56.354704098Z" level=warning msg="cleaning up after shim disconnected" id=63257e60092c2f508819e929cbf62176b7d312858e0750943471d3fa6f176451 namespace=k8s.io Feb 9 09:59:56.354919 env[1354]: time="2024-02-09T09:59:56.354714777Z" level=info msg="cleaning up dead shim" Feb 9 09:59:56.361656 env[1354]: time="2024-02-09T09:59:56.361608844Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4396 runtime=io.containerd.runc.v2\n" Feb 9 09:59:57.253415 env[1354]: time="2024-02-09T09:59:57.253364228Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:59:57.293310 env[1354]: time="2024-02-09T09:59:57.293254402Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f\"" Feb 9 09:59:57.294217 env[1354]: time="2024-02-09T09:59:57.294169734Z" level=info msg="StartContainer for \"e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f\"" Feb 9 09:59:57.313473 systemd[1]: Started cri-containerd-e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f.scope. Feb 9 09:59:57.345800 systemd[1]: cri-containerd-e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f.scope: Deactivated successfully. Feb 9 09:59:57.352076 env[1354]: time="2024-02-09T09:59:57.352035555Z" level=info msg="StartContainer for \"e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f\" returns successfully" Feb 9 09:59:57.387637 env[1354]: time="2024-02-09T09:59:57.387585023Z" level=info msg="shim disconnected" id=e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f Feb 9 09:59:57.387637 env[1354]: time="2024-02-09T09:59:57.387637421Z" level=warning msg="cleaning up after shim disconnected" id=e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f namespace=k8s.io Feb 9 09:59:57.387855 env[1354]: time="2024-02-09T09:59:57.387646461Z" level=info msg="cleaning up dead shim" Feb 9 09:59:57.395494 env[1354]: time="2024-02-09T09:59:57.395448381Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4452 runtime=io.containerd.runc.v2\n" Feb 9 09:59:57.507678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5688cbf14f7fdb729e30f2cc9755d4643bfcd4d7c6ddc366114eefb6984fc6f-rootfs.mount: Deactivated successfully. 
Feb 9 09:59:58.256493 env[1354]: time="2024-02-09T09:59:58.256446740Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:59:58.278691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753744825.mount: Deactivated successfully. Feb 9 09:59:58.295471 env[1354]: time="2024-02-09T09:59:58.295421832Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad\"" Feb 9 09:59:58.296251 env[1354]: time="2024-02-09T09:59:58.296171089Z" level=info msg="StartContainer for \"36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad\"" Feb 9 09:59:58.313154 systemd[1]: Started cri-containerd-36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad.scope. Feb 9 09:59:58.338274 systemd[1]: cri-containerd-36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad.scope: Deactivated successfully. 
Feb 9 09:59:58.340661 env[1354]: time="2024-02-09T09:59:58.340584735Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f131ccb_4d32_4a11_b99e_d2621200794d.slice/cri-containerd-36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad.scope/memory.events\": no such file or directory" Feb 9 09:59:58.345219 env[1354]: time="2024-02-09T09:59:58.344911163Z" level=info msg="StartContainer for \"36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad\" returns successfully" Feb 9 09:59:58.384030 env[1354]: time="2024-02-09T09:59:58.383968852Z" level=info msg="shim disconnected" id=36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad Feb 9 09:59:58.384299 env[1354]: time="2024-02-09T09:59:58.384278243Z" level=warning msg="cleaning up after shim disconnected" id=36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad namespace=k8s.io Feb 9 09:59:58.384365 env[1354]: time="2024-02-09T09:59:58.384352041Z" level=info msg="cleaning up dead shim" Feb 9 09:59:58.392997 env[1354]: time="2024-02-09T09:59:58.392955538Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4510 runtime=io.containerd.runc.v2\n" Feb 9 09:59:58.509163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36448083052d93ed4437b687d8abe7ced6f9621fd8cd85e9be79393b2a10e6ad-rootfs.mount: Deactivated successfully. 
Feb 9 09:59:59.260300 env[1354]: time="2024-02-09T09:59:59.260259482Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:59:59.294283 env[1354]: time="2024-02-09T09:59:59.294233975Z" level=info msg="CreateContainer within sandbox \"fbc0687e1afdafab9dc52485d03c1571492e348ed31c0afaf1c6a02ce35269c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0\"" Feb 9 09:59:59.295123 env[1354]: time="2024-02-09T09:59:59.295096109Z" level=info msg="StartContainer for \"780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0\"" Feb 9 09:59:59.314741 systemd[1]: Started cri-containerd-780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0.scope. Feb 9 09:59:59.350786 env[1354]: time="2024-02-09T09:59:59.350741507Z" level=info msg="StartContainer for \"780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0\" returns successfully" Feb 9 09:59:59.907209 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:00:02.451041 systemd-networkd[1500]: lxc_health: Link UP Feb 9 10:00:02.477310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:00:02.477507 systemd-networkd[1500]: lxc_health: Gained carrier Feb 9 10:00:03.618006 kubelet[2429]: I0209 10:00:03.617964 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4vpkr" podStartSLOduration=8.61792533 podCreationTimestamp="2024-02-09 09:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:00.293145724 +0000 UTC m=+210.611567522" watchObservedRunningTime="2024-02-09 10:00:03.61792533 +0000 UTC m=+213.936347088" Feb 9 10:00:04.107306 systemd-networkd[1500]: lxc_health: Gained 
IPv6LL Feb 9 10:00:04.659641 systemd[1]: run-containerd-runc-k8s.io-780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0-runc.AxRcEF.mount: Deactivated successfully. Feb 9 10:00:06.784440 systemd[1]: run-containerd-runc-k8s.io-780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0-runc.44XzCS.mount: Deactivated successfully. Feb 9 10:00:08.900638 systemd[1]: run-containerd-runc-k8s.io-780887045d077282320cec2440bc56ff93e256b2bc735516a29438c62ca3efb0-runc.R07ozF.mount: Deactivated successfully. Feb 9 10:00:09.016412 sshd[4220]: pam_unix(sshd:session): session closed for user core Feb 9 10:00:09.019180 systemd[1]: sshd@28-10.200.20.38:22-10.200.12.6:43386.service: Deactivated successfully. Feb 9 10:00:09.019906 systemd[1]: session-31.scope: Deactivated successfully. Feb 9 10:00:09.020872 systemd-logind[1343]: Session 31 logged out. Waiting for processes to exit. Feb 9 10:00:09.021720 systemd-logind[1343]: Removed session 31. Feb 9 10:00:23.371462 kubelet[2429]: E0209 10:00:23.371422 2429 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.38:54188->10.200.20.32:2379: read: connection timed out" Feb 9 10:00:23.376871 systemd[1]: cri-containerd-4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224.scope: Deactivated successfully. Feb 9 10:00:23.377180 systemd[1]: cri-containerd-4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224.scope: Consumed 2.472s CPU time. Feb 9 10:00:23.396544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224-rootfs.mount: Deactivated successfully. 
Feb 9 10:00:23.421629 env[1354]: time="2024-02-09T10:00:23.421577790Z" level=info msg="shim disconnected" id=4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224 Feb 9 10:00:23.421629 env[1354]: time="2024-02-09T10:00:23.421626189Z" level=warning msg="cleaning up after shim disconnected" id=4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224 namespace=k8s.io Feb 9 10:00:23.421629 env[1354]: time="2024-02-09T10:00:23.421636069Z" level=info msg="cleaning up dead shim" Feb 9 10:00:23.428770 env[1354]: time="2024-02-09T10:00:23.428723730Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5188 runtime=io.containerd.runc.v2\n" Feb 9 10:00:24.011686 systemd[1]: cri-containerd-5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4.scope: Deactivated successfully. Feb 9 10:00:24.011994 systemd[1]: cri-containerd-5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4.scope: Consumed 3.422s CPU time. Feb 9 10:00:24.031309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4-rootfs.mount: Deactivated successfully. 
Feb 9 10:00:24.040635 env[1354]: time="2024-02-09T10:00:24.040594697Z" level=info msg="shim disconnected" id=5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4 Feb 9 10:00:24.040821 env[1354]: time="2024-02-09T10:00:24.040803332Z" level=warning msg="cleaning up after shim disconnected" id=5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4 namespace=k8s.io Feb 9 10:00:24.040899 env[1354]: time="2024-02-09T10:00:24.040886970Z" level=info msg="cleaning up dead shim" Feb 9 10:00:24.048151 env[1354]: time="2024-02-09T10:00:24.048122388Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5213 runtime=io.containerd.runc.v2\n" Feb 9 10:00:24.315836 kubelet[2429]: I0209 10:00:24.315379 2429 scope.go:117] "RemoveContainer" containerID="4197f79c547014ff7e128ce04c896aeee0b45d67cf98c81a8acc499615a45224" Feb 9 10:00:24.317486 env[1354]: time="2024-02-09T10:00:24.317446299Z" level=info msg="CreateContainer within sandbox \"2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 10:00:24.319077 kubelet[2429]: I0209 10:00:24.318608 2429 scope.go:117] "RemoveContainer" containerID="5e96002b5e2a890cb5d7be50a5047b846098a7735b052dd92ae11ec26db761f4" Feb 9 10:00:24.320947 env[1354]: time="2024-02-09T10:00:24.320907892Z" level=info msg="CreateContainer within sandbox \"f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 10:00:24.350973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043161750.mount: Deactivated successfully. 
Feb 9 10:00:24.380362 env[1354]: time="2024-02-09T10:00:24.380291880Z" level=info msg="CreateContainer within sandbox \"2caf5d14abaaa940285a79dbd0745acb24ddaa83b52e6cc3a8cb1096ec40a82b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4db6adf2920b1284f97e4f1004330a416b5bc7f3ed879bd4068e72c0181de684\"" Feb 9 10:00:24.380789 env[1354]: time="2024-02-09T10:00:24.380763308Z" level=info msg="StartContainer for \"4db6adf2920b1284f97e4f1004330a416b5bc7f3ed879bd4068e72c0181de684\"" Feb 9 10:00:24.385855 env[1354]: time="2024-02-09T10:00:24.385813581Z" level=info msg="CreateContainer within sandbox \"f05b26b81de45b6c058c2c6fd82ccfcf7b81215f0b2dafb090252abac0c1846a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"21e18153f125d900c011c915d0e054955a3e353b6eca4e402a2b26d0e3480d87\"" Feb 9 10:00:24.386535 env[1354]: time="2024-02-09T10:00:24.386509364Z" level=info msg="StartContainer for \"21e18153f125d900c011c915d0e054955a3e353b6eca4e402a2b26d0e3480d87\"" Feb 9 10:00:24.408834 systemd[1]: run-containerd-runc-k8s.io-4db6adf2920b1284f97e4f1004330a416b5bc7f3ed879bd4068e72c0181de684-runc.ESbYkz.mount: Deactivated successfully. Feb 9 10:00:24.414309 systemd[1]: Started cri-containerd-4db6adf2920b1284f97e4f1004330a416b5bc7f3ed879bd4068e72c0181de684.scope. Feb 9 10:00:24.432498 systemd[1]: Started cri-containerd-21e18153f125d900c011c915d0e054955a3e353b6eca4e402a2b26d0e3480d87.scope. 
Feb 9 10:00:24.470938 env[1354]: time="2024-02-09T10:00:24.470873564Z" level=info msg="StartContainer for \"4db6adf2920b1284f97e4f1004330a416b5bc7f3ed879bd4068e72c0181de684\" returns successfully" Feb 9 10:00:24.487763 env[1354]: time="2024-02-09T10:00:24.487715661Z" level=info msg="StartContainer for \"21e18153f125d900c011c915d0e054955a3e353b6eca4e402a2b26d0e3480d87\" returns successfully" Feb 9 10:00:26.635525 kubelet[2429]: E0209 10:00:26.635416 2429 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-b353ffea6c.17b2297ad1376e83", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-b353ffea6c", UID:"e3047ddd82a2d9b46c165ae7eca1a82f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b353ffea6c"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 0, 16, 199659139, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 0, 16, 199659139, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-b353ffea6c"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.38:53970->10.200.20.32:2379: read: 
connection timed out' (will not retry!) Feb 9 10:00:29.783022 env[1354]: time="2024-02-09T10:00:29.782847165Z" level=info msg="StopPodSandbox for \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\"" Feb 9 10:00:29.783022 env[1354]: time="2024-02-09T10:00:29.782931883Z" level=info msg="TearDown network for sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" successfully" Feb 9 10:00:29.783022 env[1354]: time="2024-02-09T10:00:29.782963882Z" level=info msg="StopPodSandbox for \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" returns successfully" Feb 9 10:00:29.783431 env[1354]: time="2024-02-09T10:00:29.783346713Z" level=info msg="RemovePodSandbox for \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\"" Feb 9 10:00:29.783431 env[1354]: time="2024-02-09T10:00:29.783385832Z" level=info msg="Forcibly stopping sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\"" Feb 9 10:00:29.783482 env[1354]: time="2024-02-09T10:00:29.783457950Z" level=info msg="TearDown network for sandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" successfully" Feb 9 10:00:29.800484 env[1354]: time="2024-02-09T10:00:29.800370859Z" level=info msg="RemovePodSandbox \"465f53f3a6b9a42b44611c3b4eea5c3c13e23c25cb682c00e24477b3e00ebab5\" returns successfully" Feb 9 10:00:29.801015 env[1354]: time="2024-02-09T10:00:29.800847887Z" level=info msg="StopPodSandbox for \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\"" Feb 9 10:00:29.801015 env[1354]: time="2024-02-09T10:00:29.800926365Z" level=info msg="TearDown network for sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" successfully" Feb 9 10:00:29.801015 env[1354]: time="2024-02-09T10:00:29.800957005Z" level=info msg="StopPodSandbox for \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" returns successfully" Feb 9 10:00:29.802407 env[1354]: 
time="2024-02-09T10:00:29.801370075Z" level=info msg="RemovePodSandbox for \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\"" Feb 9 10:00:29.802407 env[1354]: time="2024-02-09T10:00:29.801395474Z" level=info msg="Forcibly stopping sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\"" Feb 9 10:00:29.802407 env[1354]: time="2024-02-09T10:00:29.801456473Z" level=info msg="TearDown network for sandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" successfully" Feb 9 10:00:29.816464 env[1354]: time="2024-02-09T10:00:29.816364390Z" level=info msg="RemovePodSandbox \"bda5a4a98981a8c8692f4e4973f281ee013901ac379a65c2971df5d6494ebfec\" returns successfully" Feb 9 10:00:33.372098 kubelet[2429]: E0209 10:00:33.372057 2429 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 10:00:34.700817 kubelet[2429]: I0209 10:00:34.700790 2429 status_manager.go:853] "Failed to get status for pod" podUID="e3047ddd82a2d9b46c165ae7eca1a82f" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b353ffea6c" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.38:54086->10.200.20.32:2379: read: connection timed out" Feb 9 10:00:43.373208 kubelet[2429]: E0209 10:00:43.373166 2429 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b353ffea6c?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"