Feb 12 19:21:37.027945 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:21:37.027964 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:21:37.027971 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 12 19:21:37.027978 kernel: printk: bootconsole [pl11] enabled
Feb 12 19:21:37.027983 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:21:37.027988 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 12 19:21:37.027995 kernel: random: crng init done
Feb 12 19:21:37.028000 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:21:37.028005 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 12 19:21:37.028010 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028016 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028022 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:21:37.028028 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028033 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028040 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028045 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028051 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028058 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028064 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 12 19:21:37.028070 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:21:37.028075 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 12 19:21:37.028081 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:21:37.028087 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:21:37.028092 kernel: NUMA: NODE_DATA [mem 0x1bf7f0900-0x1bf7f5fff]
Feb 12 19:21:37.028098 kernel: Zone ranges:
Feb 12 19:21:37.028104 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 12 19:21:37.028109 kernel: DMA32 empty
Feb 12 19:21:37.028116 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:21:37.028122 kernel: Movable zone start for each node
Feb 12 19:21:37.028127 kernel: Early memory node ranges
Feb 12 19:21:37.028133 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 12 19:21:37.028138 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 12 19:21:37.028144 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 12 19:21:37.028150 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 12 19:21:37.028155 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 12 19:21:37.028161 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 12 19:21:37.028166 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 12 19:21:37.028172 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 12 19:21:37.028177 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:21:37.028184 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:21:37.028193 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 12 19:21:37.028199 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:21:37.028205 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:21:37.028211 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:21:37.028218 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 12 19:21:37.028224 kernel: psci: SMC Calling Convention v1.4
Feb 12 19:21:37.028230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 12 19:21:37.028236 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 12 19:21:37.028242 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:21:37.028248 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:21:37.028254 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 19:21:37.028260 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:21:37.028266 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:21:37.028272 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:21:37.028278 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:21:37.028284 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:21:37.028292 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:21:37.028298 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:21:37.028304 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 12 19:21:37.028310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 12 19:21:37.028333 kernel: Policy zone: Normal
Feb 12 19:21:37.028340 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:21:37.028347 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:21:37.028353 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:21:37.028359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:21:37.028365 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:21:37.028372 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 12 19:21:37.028379 kernel: Memory: 3991928K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202232K reserved, 0K cma-reserved)
Feb 12 19:21:37.028385 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:21:37.028391 kernel: trace event string verifier disabled
Feb 12 19:21:37.028397 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:21:37.028403 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:21:37.028410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:21:37.028416 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:21:37.028422 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:21:37.028428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:21:37.028434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:21:37.028441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:21:37.028447 kernel: GICv3: 960 SPIs implemented
Feb 12 19:21:37.028454 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:21:37.028460 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:21:37.028466 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:21:37.028471 kernel: GICv3: 16 PPIs implemented
Feb 12 19:21:37.028477 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 12 19:21:37.028483 kernel: ITS: No ITS available, not enabling LPIs
Feb 12 19:21:37.028490 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:21:37.028496 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:21:37.028502 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:21:37.028508 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:21:37.028515 kernel: Console: colour dummy device 80x25
Feb 12 19:21:37.028522 kernel: printk: console [tty1] enabled
Feb 12 19:21:37.028528 kernel: ACPI: Core revision 20210730
Feb 12 19:21:37.028535 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:21:37.028541 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:21:37.028547 kernel: LSM: Security Framework initializing
Feb 12 19:21:37.028553 kernel: SELinux: Initializing.
Feb 12 19:21:37.028559 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:21:37.028566 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:21:37.028573 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 12 19:21:37.028579 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 12 19:21:37.028586 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:21:37.028592 kernel: Remapping and enabling EFI services.
Feb 12 19:21:37.028598 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:21:37.028604 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:21:37.028610 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 12 19:21:37.028617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:21:37.028623 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:21:37.028630 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:21:37.028637 kernel: SMP: Total of 2 processors activated.
Feb 12 19:21:37.028643 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:21:37.028649 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 12 19:21:37.028656 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:21:37.028662 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:21:37.028668 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:21:37.028674 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:21:37.028681 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:21:37.028688 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:21:37.028694 kernel: alternatives: patching kernel code
Feb 12 19:21:37.028705 kernel: devtmpfs: initialized
Feb 12 19:21:37.028712 kernel: KASLR enabled
Feb 12 19:21:37.028719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:21:37.028726 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:21:37.028733 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:21:37.028739 kernel: SMBIOS 3.1.0 present.
Feb 12 19:21:37.028746 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:21:37.028752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:21:37.028760 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:21:37.028767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:21:37.028774 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:21:37.028780 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:21:37.028787 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 12 19:21:37.028793 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:21:37.028800 kernel: cpuidle: using governor menu
Feb 12 19:21:37.028808 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:21:37.028814 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:21:37.028821 kernel: ACPI: bus type PCI registered
Feb 12 19:21:37.028827 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:21:37.028834 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:21:37.028840 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:21:37.028847 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:21:37.028854 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:21:37.028860 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:21:37.028868 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:21:37.028875 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:21:37.028881 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:21:37.028888 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:21:37.028894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:21:37.028901 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:21:37.028907 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:21:37.028914 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:21:37.028920 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:21:37.028928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:21:37.028934 kernel: ACPI: Interpreter enabled
Feb 12 19:21:37.028941 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:21:37.028947 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:21:37.028954 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:21:37.028960 kernel: printk: bootconsole [pl11] disabled
Feb 12 19:21:37.028967 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 12 19:21:37.028974 kernel: iommu: Default domain type: Translated
Feb 12 19:21:37.028980 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:21:37.028988 kernel: vgaarb: loaded
Feb 12 19:21:37.028994 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:21:37.029001 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:21:37.029008 kernel: PTP clock support registered
Feb 12 19:21:37.029014 kernel: Registered efivars operations
Feb 12 19:21:37.029020 kernel: No ACPI PMU IRQ for CPU0
Feb 12 19:21:37.029027 kernel: No ACPI PMU IRQ for CPU1
Feb 12 19:21:37.029033 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:21:37.029040 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:21:37.029048 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:21:37.029054 kernel: pnp: PnP ACPI init
Feb 12 19:21:37.029061 kernel: pnp: PnP ACPI: found 0 devices
Feb 12 19:21:37.029067 kernel: NET: Registered PF_INET protocol family
Feb 12 19:21:37.029074 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:21:37.029081 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:21:37.029087 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:21:37.029094 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:21:37.029100 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:21:37.029108 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:21:37.029115 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:21:37.029122 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:21:37.029128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:21:37.029135 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:21:37.029141 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 12 19:21:37.029148 kernel: kvm [1]: HYP mode not available
Feb 12 19:21:37.029155 kernel: Initialise system trusted keyrings
Feb 12 19:21:37.029161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:21:37.029169 kernel: Key type asymmetric registered
Feb 12 19:21:37.029175 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:21:37.029182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:21:37.029188 kernel: io scheduler mq-deadline registered
Feb 12 19:21:37.029195 kernel: io scheduler kyber registered
Feb 12 19:21:37.029201 kernel: io scheduler bfq registered
Feb 12 19:21:37.029208 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:21:37.029214 kernel: thunder_xcv, ver 1.0
Feb 12 19:21:37.029221 kernel: thunder_bgx, ver 1.0
Feb 12 19:21:37.029229 kernel: nicpf, ver 1.0
Feb 12 19:21:37.029235 kernel: nicvf, ver 1.0
Feb 12 19:21:37.029362 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:21:37.029426 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:21:36 UTC (1707765696)
Feb 12 19:21:37.029435 kernel: efifb: probing for efifb
Feb 12 19:21:37.029442 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:21:37.029448 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:21:37.029455 kernel: efifb: scrolling: redraw
Feb 12 19:21:37.029464 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:21:37.029471 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:21:37.029477 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:21:37.029484 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 12 19:21:37.029490 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:21:37.029497 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:21:37.029503 kernel: Segment Routing with IPv6
Feb 12 19:21:37.029510 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:21:37.029516 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:21:37.029524 kernel: Key type dns_resolver registered
Feb 12 19:21:37.029531 kernel: registered taskstats version 1
Feb 12 19:21:37.029537 kernel: Loading compiled-in X.509 certificates
Feb 12 19:21:37.029544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:21:37.029550 kernel: Key type .fscrypt registered
Feb 12 19:21:37.029557 kernel: Key type fscrypt-provisioning registered
Feb 12 19:21:37.029564 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:21:37.029570 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:21:37.029577 kernel: ima: No architecture policies found
Feb 12 19:21:37.029585 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:21:37.029591 kernel: Run /init as init process
Feb 12 19:21:37.029598 kernel: with arguments:
Feb 12 19:21:37.029604 kernel: /init
Feb 12 19:21:37.029610 kernel: with environment:
Feb 12 19:21:37.029617 kernel: HOME=/
Feb 12 19:21:37.029623 kernel: TERM=linux
Feb 12 19:21:37.029630 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:21:37.029639 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:21:37.029649 systemd[1]: Detected virtualization microsoft.
Feb 12 19:21:37.029656 systemd[1]: Detected architecture arm64.
Feb 12 19:21:37.029663 systemd[1]: Running in initrd.
Feb 12 19:21:37.029670 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:21:37.029677 systemd[1]: Hostname set to .
Feb 12 19:21:37.029684 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:21:37.029691 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:21:37.029699 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:21:37.029706 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:21:37.029713 systemd[1]: Reached target paths.target.
Feb 12 19:21:37.029719 systemd[1]: Reached target slices.target.
Feb 12 19:21:37.029726 systemd[1]: Reached target swap.target.
Feb 12 19:21:37.029733 systemd[1]: Reached target timers.target.
Feb 12 19:21:37.029740 systemd[1]: Listening on iscsid.socket.
Feb 12 19:21:37.029747 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:21:37.029755 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:21:37.029763 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:21:37.029770 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:21:37.029777 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:21:37.029784 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:21:37.029791 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:21:37.029798 systemd[1]: Reached target sockets.target.
Feb 12 19:21:37.029805 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:21:37.029812 systemd[1]: Finished network-cleanup.service.
Feb 12 19:21:37.029820 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:21:37.029827 systemd[1]: Starting systemd-journald.service...
Feb 12 19:21:37.029834 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:21:37.029841 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:21:37.029852 systemd-journald[276]: Journal started
Feb 12 19:21:37.029889 systemd-journald[276]: Runtime Journal (/run/log/journal/dec85c7bdfe948d789c311110c60cc53) is 8.0M, max 78.6M, 70.6M free.
Feb 12 19:21:37.021398 systemd-modules-load[277]: Inserted module 'overlay'
Feb 12 19:21:37.062340 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:21:37.062369 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:21:37.067653 kernel: Bridge firewalling registered
Feb 12 19:21:37.067761 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 12 19:21:37.081542 systemd-resolved[278]: Positive Trust Anchors:
Feb 12 19:21:37.099625 kernel: SCSI subsystem initialized
Feb 12 19:21:37.099652 systemd[1]: Started systemd-journald.service.
Feb 12 19:21:37.081557 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:21:37.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.081586 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:21:37.200388 kernel: audit: type=1130 audit(1707765697.104:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.200410 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:21:37.200420 kernel: audit: type=1130 audit(1707765697.144:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.200435 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:21:37.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.083707 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 12 19:21:37.233886 kernel: audit: type=1130 audit(1707765697.205:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.233905 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:21:37.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.121034 systemd[1]: Started systemd-resolved.service.
Feb 12 19:21:37.260892 kernel: audit: type=1130 audit(1707765697.238:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.196353 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:21:37.286855 kernel: audit: type=1130 audit(1707765697.265:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.205495 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:21:37.312916 kernel: audit: type=1130 audit(1707765697.291:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.238875 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:21:37.265968 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 12 19:21:37.283631 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:21:37.291877 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:21:37.322343 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:21:37.327659 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:21:37.350162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:21:37.363699 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:21:37.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.373596 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:21:37.442311 kernel: audit: type=1130 audit(1707765697.373:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.442350 kernel: audit: type=1130 audit(1707765697.395:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.442360 kernel: audit: type=1130 audit(1707765697.421:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.396253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:21:37.442638 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:21:37.460022 dracut-cmdline[298]: dracut-dracut-053
Feb 12 19:21:37.460022 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Feb 12 19:21:37.460022 dracut-cmdline[298]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:21:37.551337 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:21:37.562337 kernel: iscsi: registered transport (tcp)
Feb 12 19:21:37.582554 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:21:37.582614 kernel: QLogic iSCSI HBA Driver
Feb 12 19:21:37.612495 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:21:37.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:37.618087 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:21:37.674338 kernel: raid6: neonx8 gen() 13810 MB/s
Feb 12 19:21:37.695329 kernel: raid6: neonx8 xor() 10824 MB/s
Feb 12 19:21:37.715339 kernel: raid6: neonx4 gen() 13573 MB/s
Feb 12 19:21:37.736330 kernel: raid6: neonx4 xor() 11204 MB/s
Feb 12 19:21:37.756327 kernel: raid6: neonx2 gen() 12972 MB/s
Feb 12 19:21:37.776322 kernel: raid6: neonx2 xor() 10247 MB/s
Feb 12 19:21:37.798323 kernel: raid6: neonx1 gen() 10516 MB/s
Feb 12 19:21:37.818322 kernel: raid6: neonx1 xor() 8800 MB/s
Feb 12 19:21:37.840323 kernel: raid6: int64x8 gen() 6300 MB/s
Feb 12 19:21:37.861324 kernel: raid6: int64x8 xor() 3549 MB/s
Feb 12 19:21:37.881323 kernel: raid6: int64x4 gen() 7284 MB/s
Feb 12 19:21:37.901327 kernel: raid6: int64x4 xor() 3851 MB/s
Feb 12 19:21:37.922323 kernel: raid6: int64x2 gen() 6150 MB/s
Feb 12 19:21:37.943327 kernel: raid6: int64x2 xor() 3324 MB/s
Feb 12 19:21:37.964323 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 12 19:21:37.989853 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 12 19:21:37.989863 kernel: raid6: using algorithm neonx8 gen() 13810 MB/s
Feb 12 19:21:37.989871 kernel: raid6: .... xor() 10824 MB/s, rmw enabled
Feb 12 19:21:37.994477 kernel: raid6: using neon recovery algorithm
Feb 12 19:21:38.015880 kernel: xor: measuring software checksum speed
Feb 12 19:21:38.015892 kernel: 8regs : 17304 MB/sec
Feb 12 19:21:38.020330 kernel: 32regs : 20760 MB/sec
Feb 12 19:21:38.029493 kernel: arm64_neon : 27939 MB/sec
Feb 12 19:21:38.029502 kernel: xor: using function: arm64_neon (27939 MB/sec)
Feb 12 19:21:38.085329 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:21:38.094453 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:21:38.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:38.103000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:21:38.103000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:21:38.104039 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:21:38.118856 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 12 19:21:38.125511 systemd[1]: Started systemd-udevd.service.
Feb 12 19:21:38.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:38.136444 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:21:38.150037 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 12 19:21:38.176659 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:21:38.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:38.182244 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:21:38.222471 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:21:38.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:38.288418 kernel: hv_vmbus: Vmbus version:5.3
Feb 12 19:21:38.297348 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:21:38.305341 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:21:38.305389 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:21:38.331340 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:21:38.331391 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:21:38.342019 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:21:38.343330 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:21:38.353462 kernel: scsi host1: storvsc_host_t
Feb 12 19:21:38.367396 kernel: scsi host0: storvsc_host_t
Feb 12 19:21:38.367568 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:21:38.375226 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:21:38.394337 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:21:38.394581 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:21:38.396334 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:21:38.409784 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:21:38.410006 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:21:38.410091 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:21:38.421351 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:21:38.421569 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:21:38.435372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:21:38.435425 kernel: hv_netvsc 0022487c-fa39-0022-487c-fa390022487c eth0: VF slot 1 added
Feb 12 19:21:38.435578 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:21:38.452359
kernel: hv_vmbus: registering driver hv_pci Feb 12 19:21:38.460340 kernel: hv_pci 3ec871c5-8cac-4ef3-add3-4e46a99994e0: PCI VMBus probing: Using version 0x10004 Feb 12 19:21:38.477840 kernel: hv_pci 3ec871c5-8cac-4ef3-add3-4e46a99994e0: PCI host bridge to bus 8cac:00 Feb 12 19:21:38.478000 kernel: pci_bus 8cac:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 12 19:21:38.478115 kernel: pci_bus 8cac:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:21:38.493595 kernel: pci 8cac:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 12 19:21:38.506498 kernel: pci 8cac:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:21:38.527847 kernel: pci 8cac:00:02.0: enabling Extended Tags Feb 12 19:21:38.548393 kernel: pci 8cac:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8cac:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 12 19:21:38.561832 kernel: pci_bus 8cac:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:21:38.562032 kernel: pci 8cac:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:21:38.602344 kernel: mlx5_core 8cac:00:02.0: firmware version: 16.30.1284 Feb 12 19:21:38.758346 kernel: mlx5_core 8cac:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 12 19:21:38.817441 kernel: hv_netvsc 0022487c-fa39-0022-487c-fa390022487c eth0: VF registering: eth1 Feb 12 19:21:38.817704 kernel: mlx5_core 8cac:00:02.0 eth1: joined to eth0 Feb 12 19:21:38.830343 kernel: mlx5_core 8cac:00:02.0 enP36012s1: renamed from eth1 Feb 12 19:21:38.991607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:21:39.101340 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (540) Feb 12 19:21:39.114205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:21:39.231622 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Feb 12 19:21:39.329659 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:21:39.336049 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:21:39.348918 systemd[1]: Starting disk-uuid.service... Feb 12 19:21:39.377343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:21:39.384341 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:21:40.392216 disk-uuid[603]: The operation has completed successfully. Feb 12 19:21:40.397746 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:21:40.453746 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:21:40.458500 systemd[1]: Finished disk-uuid.service. Feb 12 19:21:40.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.464271 systemd[1]: Starting verity-setup.service... Feb 12 19:21:40.510341 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:21:40.719017 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:21:40.726506 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:21:40.739640 systemd[1]: Finished verity-setup.service. Feb 12 19:21:40.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.804000 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:21:40.812265 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Feb 12 19:21:40.808688 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:21:40.809462 systemd[1]: Starting ignition-setup.service... Feb 12 19:21:40.817423 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:21:40.858145 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:40.858196 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:40.863531 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:40.916536 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:21:40.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.927000 audit: BPF prog-id=9 op=LOAD Feb 12 19:21:40.928003 systemd[1]: Starting systemd-networkd.service... Feb 12 19:21:40.955359 systemd-networkd[841]: lo: Link UP Feb 12 19:21:40.958349 systemd-networkd[841]: lo: Gained carrier Feb 12 19:21:40.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.958818 systemd-networkd[841]: Enumeration completed Feb 12 19:21:40.959218 systemd[1]: Started systemd-networkd.service. Feb 12 19:21:40.965155 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:21:40.965577 systemd[1]: Reached target network.target. Feb 12 19:21:41.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.975833 systemd[1]: Starting iscsiuio.service... 
Feb 12 19:21:41.035543 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 12 19:21:41.035570 kernel: audit: type=1130 audit(1707765701.002:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:40.989070 systemd[1]: Started iscsiuio.service. Feb 12 19:21:41.043668 iscsid[846]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:21:41.043668 iscsid[846]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:21:41.043668 iscsid[846]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:21:41.043668 iscsid[846]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:21:41.043668 iscsid[846]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:21:41.043668 iscsid[846]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:21:41.043668 iscsid[846]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:21:41.212744 kernel: audit: type=1130 audit(1707765701.082:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.212779 kernel: audit: type=1130 audit(1707765701.129:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 19:21:41.212790 kernel: mlx5_core 8cac:00:02.0 enP36012s1: Link up Feb 12 19:21:41.212964 kernel: hv_netvsc 0022487c-fa39-0022-487c-fa390022487c eth0: Data path switched to VF: enP36012s1 Feb 12 19:21:41.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.031113 systemd[1]: Starting iscsid.service... Feb 12 19:21:41.238958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:21:41.056271 systemd[1]: Started iscsid.service. Feb 12 19:21:41.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.103308 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:21:41.268088 kernel: audit: type=1130 audit(1707765701.242:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.118311 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:21:41.129830 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:21:41.166875 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:21:41.174658 systemd[1]: Reached target remote-fs.target. Feb 12 19:21:41.204548 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:21:41.223269 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:21:41.223747 systemd[1]: Finished dracut-pre-mount.service. 
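[Editor's note: the iscsid warning above spells out the file it expects. A minimal sketch of creating it, using the example IQN quoted in the message itself; the temp path is used here only for illustration — on a real host the target is /etc/iscsi/initiatorname.iscsi.]

```shell
# Sketch only: iscsid expects /etc/iscsi/initiatorname.iscsi to contain a
# line of the form InitiatorName=iqn.yyyy-mm.<reversed domain>[:identifier].
# We write to a temp path so this is safe to run anywhere.
f="${TMPDIR:-/tmp}/initiatorname.iscsi"
printf 'InitiatorName=iqn.2001-04.com.redhat:fc6\n' > "$f"
grep -q '^InitiatorName=iqn\.' "$f" && echo format-ok
```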
Feb 12 19:21:41.238367 systemd-networkd[841]: enP36012s1: Link UP Feb 12 19:21:41.238451 systemd-networkd[841]: eth0: Link UP Feb 12 19:21:41.238580 systemd-networkd[841]: eth0: Gained carrier Feb 12 19:21:41.268546 systemd-networkd[841]: enP36012s1: Gained carrier Feb 12 19:21:41.288380 systemd-networkd[841]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:21:41.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.354818 systemd[1]: Finished ignition-setup.service. Feb 12 19:21:41.384159 kernel: audit: type=1130 audit(1707765701.359:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:41.384398 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:21:42.844488 systemd-networkd[841]: eth0: Gained IPv6LL Feb 12 19:21:45.125458 ignition[868]: Ignition 2.14.0 Feb 12 19:21:45.125471 ignition[868]: Stage: fetch-offline Feb 12 19:21:45.125530 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:45.125554 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:45.276366 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:45.276526 ignition[868]: parsed url from cmdline: "" Feb 12 19:21:45.283665 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:21:45.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:45.276530 ignition[868]: no config URL provided Feb 12 19:21:45.324352 kernel: audit: type=1130 audit(1707765705.289:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.290479 systemd[1]: Starting ignition-fetch.service... Feb 12 19:21:45.276535 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:21:45.276544 ignition[868]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:21:45.276549 ignition[868]: failed to fetch config: resource requires networking Feb 12 19:21:45.276785 ignition[868]: Ignition finished successfully Feb 12 19:21:45.317674 ignition[874]: Ignition 2.14.0 Feb 12 19:21:45.317681 ignition[874]: Stage: fetch Feb 12 19:21:45.317795 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:45.317814 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:45.320306 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:45.324480 ignition[874]: parsed url from cmdline: "" Feb 12 19:21:45.367626 unknown[874]: fetched base config from "system" Feb 12 19:21:45.324485 ignition[874]: no config URL provided Feb 12 19:21:45.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.367634 unknown[874]: fetched base config from "system" Feb 12 19:21:45.414655 kernel: audit: type=1130 audit(1707765705.383:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:21:45.324493 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:21:45.367640 unknown[874]: fetched user config from "azure" Feb 12 19:21:45.324507 ignition[874]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:21:45.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.374540 systemd[1]: Finished ignition-fetch.service. Feb 12 19:21:45.465693 kernel: audit: type=1130 audit(1707765705.437:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.324538 ignition[874]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:21:45.405369 systemd[1]: Starting ignition-kargs.service... Feb 12 19:21:45.346890 ignition[874]: GET result: OK Feb 12 19:21:45.432385 systemd[1]: Finished ignition-kargs.service. Feb 12 19:21:45.346952 ignition[874]: config has been read from IMDS userdata Feb 12 19:21:45.346985 ignition[874]: parsing config with SHA512: e0d997cd738193fceff86d257f3e5278ed53d4cab9fd101d84218d78a675ddd9e8eda39358a699f749e4cae437e34b868c4519ac526681cd70db893e9cbd854b Feb 12 19:21:45.491548 systemd[1]: Starting ignition-disks.service... Feb 12 19:21:45.368165 ignition[874]: fetch: fetch complete Feb 12 19:21:45.368170 ignition[874]: fetch: fetch passed Feb 12 19:21:45.368211 ignition[874]: Ignition finished successfully Feb 12 19:21:45.419159 ignition[880]: Ignition 2.14.0 Feb 12 19:21:45.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.511691 systemd[1]: Finished ignition-disks.service. 
Feb 12 19:21:45.556531 kernel: audit: type=1130 audit(1707765705.519:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.419165 ignition[880]: Stage: kargs Feb 12 19:21:45.540014 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:21:45.419278 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:45.547397 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:21:45.419297 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:45.555386 systemd[1]: Reached target local-fs.target. Feb 12 19:21:45.422044 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:45.560868 systemd[1]: Reached target sysinit.target. Feb 12 19:21:45.426635 ignition[880]: kargs: kargs passed Feb 12 19:21:45.568893 systemd[1]: Reached target basic.target. Feb 12 19:21:45.426957 ignition[880]: Ignition finished successfully Feb 12 19:21:45.584744 systemd[1]: Starting systemd-fsck-root.service... 
Feb 12 19:21:45.503956 ignition[886]: Ignition 2.14.0 Feb 12 19:21:45.503963 ignition[886]: Stage: disks Feb 12 19:21:45.504081 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:45.504105 ignition[886]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:45.508365 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:45.510104 ignition[886]: disks: disks passed Feb 12 19:21:45.510156 ignition[886]: Ignition finished successfully Feb 12 19:21:45.656932 systemd-fsck[894]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 12 19:21:45.670109 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:21:45.700429 kernel: audit: type=1130 audit(1707765705.674:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:45.694752 systemd[1]: Mounting sysroot.mount... Feb 12 19:21:45.721338 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:21:45.722366 systemd[1]: Mounted sysroot.mount. Feb 12 19:21:45.727107 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:21:45.773008 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:21:45.778606 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:21:45.792073 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:21:45.792117 systemd[1]: Reached target ignition-diskful.target. 
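[Editor's note: each Ignition stage above logs the SHA512 digest of the base config before acting on it. The same kind of digest can be reproduced with coreutils; the file contents below are illustrative, not the real /usr/lib/ignition/base.d/base.ign.]

```shell
# Reproduce the style of digest Ignition logs for its config.
# The JSON payload here is a stand-in, so the digest will differ
# from the ones in the log above.
cfg="${TMPDIR:-/tmp}/base.ign"
printf '{"ignition": {"version": "2.2.0"}}' > "$cfg"
sha512sum "$cfg" | awk '{print $1}'   # 128 hex characters
```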
Feb 12 19:21:45.808207 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:21:45.854066 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:21:45.859345 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:21:45.884338 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (905) Feb 12 19:21:45.896484 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:45.896524 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:45.901555 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:45.905587 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:21:45.916497 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:21:45.940179 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:21:45.950351 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:21:45.959512 initrd-setup-root[952]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:21:46.658773 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:21:46.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.685909 systemd[1]: Starting ignition-mount.service... Feb 12 19:21:46.698347 kernel: audit: type=1130 audit(1707765706.663:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.693021 systemd[1]: Starting sysroot-boot.service... Feb 12 19:21:46.703441 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:21:46.703553 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 12 19:21:46.738385 ignition[972]: INFO : Ignition 2.14.0 Feb 12 19:21:46.738385 ignition[972]: INFO : Stage: mount Feb 12 19:21:46.738385 ignition[972]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:46.738385 ignition[972]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:46.828103 kernel: audit: type=1130 audit(1707765706.750:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.828136 kernel: audit: type=1130 audit(1707765706.782:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:46.739254 systemd[1]: Finished sysroot-boot.service. Feb 12 19:21:46.833197 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:46.833197 ignition[972]: INFO : mount: mount passed Feb 12 19:21:46.833197 ignition[972]: INFO : Ignition finished successfully Feb 12 19:21:46.776155 systemd[1]: Finished ignition-mount.service. 
Feb 12 19:21:47.340285 coreos-metadata[904]: Feb 12 19:21:47.340 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:21:47.350789 coreos-metadata[904]: Feb 12 19:21:47.350 INFO Fetch successful Feb 12 19:21:47.384043 coreos-metadata[904]: Feb 12 19:21:47.384 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:21:47.396744 coreos-metadata[904]: Feb 12 19:21:47.396 INFO Fetch successful Feb 12 19:21:47.404153 coreos-metadata[904]: Feb 12 19:21:47.402 INFO wrote hostname ci-3510.3.2-a-434dfde19b to /sysroot/etc/hostname Feb 12 19:21:47.404918 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:21:47.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:47.419388 systemd[1]: Starting ignition-files.service... Feb 12 19:21:47.452833 kernel: audit: type=1130 audit(1707765707.418:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:21:47.451785 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:21:47.480264 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (983) Feb 12 19:21:47.480341 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:21:47.480361 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:21:47.490271 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:21:47.495462 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
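[Editor's note: the hostname fetch above queries Azure's instance-metadata endpoint. A hedged sketch of the same request follows; the Metadata:true header is required by IMDS, and the curl line is left commented out because 169.254.169.254 only answers from inside an Azure VM.]

```shell
# Assemble the IMDS request coreos-metadata issued above.
# The actual network call is commented out: it only works on an Azure VM.
endpoint="http://169.254.169.254/metadata/instance/compute/name"
url="${endpoint}?api-version=2017-08-01&format=text"
# curl -s -H 'Metadata:true' "$url"
echo "$url"
```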
Feb 12 19:21:47.512394 ignition[1002]: INFO : Ignition 2.14.0 Feb 12 19:21:47.512394 ignition[1002]: INFO : Stage: files Feb 12 19:21:47.523805 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:21:47.523805 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:21:47.523805 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:21:47.523805 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:21:47.523805 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:21:47.523805 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:21:47.609238 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:21:47.617954 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:21:47.670540 unknown[1002]: wrote ssh authorized keys file for user: core Feb 12 19:21:47.677597 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:21:47.677597 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:21:47.677597 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:21:47.677597 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:21:47.677597 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: 
attempt #1 Feb 12 19:21:48.127158 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:21:48.359879 ignition[1002]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 19:21:48.377877 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:21:48.377877 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:21:48.377877 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:21:48.729630 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:21:48.862813 ignition[1002]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 19:21:48.862813 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:21:48.892005 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:21:48.892005 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:21:49.251891 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:21:49.529263 ignition[1002]: DEBUG : files: createFilesystemsFiles: createFiles: 
op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 19:21:49.548275 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:21:49.548275 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:21:49.548275 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:21:49.605877 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:21:50.302973 ignition[1002]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:21:50.322658 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:21:50.468095 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1002)
Feb 12 19:21:50.468119 kernel: audit: type=1130 audit(1707765710.406:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602360518"
Feb 12 19:21:50.468169 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602360518": device or resource busy
Feb 12 19:21:50.468169 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3602360518", trying btrfs: device or resource busy
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602360518"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602360518"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3602360518"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3602360518"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2714925895"
Feb 12 19:21:50.468169 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2714925895": device or resource busy
Feb 12 19:21:50.468169 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2714925895", trying btrfs: device or resource busy
Feb 12 19:21:50.468169 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2714925895"
Feb 12 19:21:50.769184 kernel: audit: type=1130 audit(1707765710.472:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.769220 kernel: audit: type=1131 audit(1707765710.494:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.769233 kernel: audit: type=1130 audit(1707765710.540:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.769244 kernel: audit: type=1130 audit(1707765710.656:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.769771 kernel: audit: type=1131 audit(1707765710.685:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.357025 systemd[1]: mnt-oem3602360518.mount: Deactivated successfully.
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2714925895"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2714925895"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2714925895"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(14): [started] processing unit "waagent.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(14): [finished] processing unit "waagent.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(15): [started] processing unit "containerd.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(15): [finished] processing unit "containerd.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(19): [started] processing unit "prepare-critools.service"
Feb 12 19:21:50.783556 ignition[1002]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:21:50.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.401718 systemd[1]: Finished ignition-files.service.
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(19): [finished] processing unit "prepare-critools.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1c): [started] setting preset to enabled for "waagent.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1c): [finished] setting preset to enabled for "waagent.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:21:51.069730 ignition[1002]: INFO : files: files passed
Feb 12 19:21:51.069730 ignition[1002]: INFO : Ignition finished successfully
Feb 12 19:21:51.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.269302 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:21:51.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.435297 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:21:51.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.293946 iscsid[846]: iscsid shutting down.
Feb 12 19:21:51.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.440608 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:21:51.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.441391 systemd[1]: Starting ignition-quench.service...
Feb 12 19:21:51.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.457415 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:21:51.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.457515 systemd[1]: Finished ignition-quench.service.
Feb 12 19:21:50.521623 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:21:50.575228 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:21:51.361507 ignition[1040]: INFO : Ignition 2.14.0
Feb 12 19:21:51.361507 ignition[1040]: INFO : Stage: umount
Feb 12 19:21:51.361507 ignition[1040]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:21:51.361507 ignition[1040]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:21:51.361507 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:21:51.361507 ignition[1040]: INFO : umount: umount passed
Feb 12 19:21:51.361507 ignition[1040]: INFO : Ignition finished successfully
Feb 12 19:21:51.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.602584 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:21:51.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.642602 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:21:50.642724 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:21:50.685620 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:21:50.718322 systemd[1]: Reached target initrd.target.
Feb 12 19:21:50.727483 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:21:50.740226 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:21:51.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.783824 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:21:51.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.802264 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:21:51.523000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:21:50.830999 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:21:50.845688 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:21:51.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.861465 systemd[1]: Stopped target timers.target.
Feb 12 19:21:51.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.874609 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:21:51.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.874726 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:21:50.887418 systemd[1]: Stopped target initrd.target.
Feb 12 19:21:50.899270 systemd[1]: Stopped target basic.target.
Feb 12 19:21:51.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:50.911677 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:21:50.939084 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:21:50.970477 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:21:50.985558 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:21:51.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.004759 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:21:51.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.025650 systemd[1]: Stopped target sysinit.target.
Feb 12 19:21:51.642963 kernel: hv_netvsc 0022487c-fa39-0022-487c-fa390022487c eth0: Data path switched from VF: enP36012s1
Feb 12 19:21:51.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.048207 systemd[1]: Stopped target local-fs.target.
Feb 12 19:21:51.065549 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:21:51.074791 systemd[1]: Stopped target swap.target.
Feb 12 19:21:51.093446 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:21:51.693450 kernel: kauditd_printk_skb: 29 callbacks suppressed
Feb 12 19:21:51.693473 kernel: audit: type=1131 audit(1707765711.664:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.093561 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:21:51.117585 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:21:51.748059 kernel: audit: type=1131 audit(1707765711.697:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.748083 kernel: audit: type=1131 audit(1707765711.725:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.132673 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:21:51.771503 kernel: audit: type=1131 audit(1707765711.749:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.132776 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:21:51.154021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:21:51.825892 kernel: audit: type=1130 audit(1707765711.776:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.825921 kernel: audit: type=1131 audit(1707765711.776:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.154128 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:21:51.168307 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:21:51.168408 systemd[1]: Stopped ignition-files.service.
Feb 12 19:21:51.183461 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 12 19:21:51.895717 kernel: audit: type=1131 audit(1707765711.799:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.895742 kernel: audit: type=1131 audit(1707765711.865:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:21:51.183555 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 12 19:21:51.208235 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:21:51.225298 systemd[1]: Stopping iscsid.service...
Feb 12 19:21:51.230186 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:21:51.247367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:21:51.247532 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:21:51.253195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:21:51.253294 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:21:51.932000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:21:51.264185 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:21:51.960374 kernel: audit: type=1334 audit(1707765711.932:81): prog-id=8 op=UNLOAD
Feb 12 19:21:51.960399 kernel: audit: type=1334 audit(1707765711.932:82): prog-id=7 op=UNLOAD
Feb 12 19:21:51.932000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:21:51.933000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:21:51.933000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:21:51.933000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:21:51.264287 systemd[1]: Stopped iscsid.service.
Feb 12 19:21:51.274669 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:21:51.274753 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:21:51.288477 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:21:51.288581 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:21:51.299129 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:21:51.299227 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:21:51.307710 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 19:21:51.307795 systemd[1]: Stopped ignition-fetch.service.
Feb 12 19:21:51.325853 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:21:51.325942 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:21:51.336750 systemd[1]: Stopped target paths.target.
Feb 12 19:21:51.346622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:21:51.361251 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:21:51.367014 systemd[1]: Stopped target slices.target.
Feb 12 19:21:51.376797 systemd[1]: Stopped target sockets.target.
Feb 12 19:21:51.386017 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:21:51.386109 systemd[1]: Closed iscsid.socket.
Feb 12 19:21:51.398427 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:21:51.398564 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:21:51.417802 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:21:51.431416 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:21:51.431535 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:21:51.440666 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:21:51.440751 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:21:51.450595 systemd[1]: Stopped target network.target.
Feb 12 19:21:51.979351 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Feb 12 19:21:51.458746 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:21:51.458783 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:21:51.470346 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:21:51.479051 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:21:51.491294 systemd-networkd[841]: eth0: DHCPv6 lease lost
Feb 12 19:21:51.492547 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:21:51.492650 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:21:51.506091 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:21:51.506205 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:21:51.515098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:21:51.515146 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:21:51.524745 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:21:51.531839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:21:51.531918 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:21:51.541502 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:21:51.541562 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:21:51.553819 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:21:51.553866 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:21:51.558867 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:21:51.569624 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:21:51.574500 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:21:51.574666 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:21:51.582930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:21:51.582975 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:21:51.591516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:21:51.591550 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:21:51.599898 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:21:51.599947 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:21:51.613474 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:21:51.613523 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:21:51.621819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:21:51.621875 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:21:51.639929 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:21:51.655845 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 19:21:51.655922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 19:21:51.669687 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:21:51.669740 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:21:51.697760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:21:51.697810 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:21:51.980000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:21:51.726802 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:21:51.726885 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 12 19:21:51.727464 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:21:51.727568 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:21:51.749927 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:21:51.750002 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:21:51.777180 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:21:51.777231 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:21:51.840502 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:21:51.840589 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:21:51.866266 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:21:51.905886 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:21:51.931710 systemd[1]: Switching root.
Feb 12 19:21:51.981072 systemd-journald[276]: Journal stopped
Feb 12 19:22:06.464994 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:22:06.465054 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:22:06.465067 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:22:06.465079 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:22:06.465087 kernel: SELinux: policy capability open_perms=1
Feb 12 19:22:06.465095 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:22:06.465105 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:22:06.465113 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:22:06.465122 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:22:06.465130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:22:06.465140 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:22:06.465150 systemd[1]: Successfully loaded SELinux policy in 305.155ms.
Feb 12 19:22:06.465160 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.288ms.
Feb 12 19:22:06.465192 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:22:06.465208 systemd[1]: Detected virtualization microsoft.
Feb 12 19:22:06.465218 systemd[1]: Detected architecture arm64.
Feb 12 19:22:06.465227 systemd[1]: Detected first boot.
Feb 12 19:22:06.465236 systemd[1]: Hostname set to .
Feb 12 19:22:06.465245 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:22:06.465254 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:22:06.465263 kernel: kauditd_printk_skb: 6 callbacks suppressed Feb 12 19:22:06.465272 kernel: audit: type=1400 audit(1707765718.134:89): avc: denied { associate } for pid=1091 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:22:06.465284 kernel: audit: type=1300 audit(1707765718.134:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8af8 a2=40000cea00 a3=32 items=0 ppid=1074 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:06.465294 kernel: audit: type=1327 audit(1707765718.134:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:22:06.465303 kernel: audit: type=1400 audit(1707765718.148:90): avc: denied { associate } for pid=1091 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:22:06.465325 kernel: audit: type=1300 audit(1707765718.148:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1074 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:06.465334 
kernel: audit: type=1307 audit(1707765718.148:90): cwd="/" Feb 12 19:22:06.465345 kernel: audit: type=1302 audit(1707765718.148:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:06.465354 kernel: audit: type=1302 audit(1707765718.148:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:06.465364 kernel: audit: type=1327 audit(1707765718.148:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:22:06.465373 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:22:06.465383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:22:06.465392 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:22:06.465402 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:22:06.465457 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:22:06.465469 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:22:06.465479 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:22:06.465489 systemd[1]: Created slice system-getty.slice. Feb 12 19:22:06.465498 systemd[1]: Created slice system-modprobe.slice. 
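The `proctitle=` values in the audit records above are hex-encoded argv strings with NUL bytes between arguments. As an illustrative decoding sketch (not part of the log; the payload below is a shortened two-argument prefix of the one recorded above):

```python
# Decode an audit PROCTITLE hex payload into its argv list.
# The kernel emits argv joined by NUL bytes, then hex-encodes the result.
hex_payload = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D"
    "67656E657261746F72732F746F7263782D67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F72"
)
argv = bytes.fromhex(hex_payload).split(b"\x00")
print([a.decode() for a in argv])
# → ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']
```

Decoding the full payload recorded in the log yields the torcx-generator invocation with its generator output directories.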
Feb 12 19:22:06.465507 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:22:06.465519 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:22:06.465530 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:22:06.465539 systemd[1]: Created slice user.slice. Feb 12 19:22:06.465549 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:22:06.465559 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:22:06.465568 systemd[1]: Set up automount boot.automount. Feb 12 19:22:06.465577 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:22:06.465587 systemd[1]: Reached target integritysetup.target. Feb 12 19:22:06.465596 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:22:06.465606 systemd[1]: Reached target remote-fs.target. Feb 12 19:22:06.465632 systemd[1]: Reached target slices.target. Feb 12 19:22:06.465644 systemd[1]: Reached target swap.target. Feb 12 19:22:06.465654 systemd[1]: Reached target torcx.target. Feb 12 19:22:06.465663 systemd[1]: Reached target veritysetup.target. Feb 12 19:22:06.465674 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:22:06.465683 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:22:06.465693 kernel: audit: type=1400 audit(1707765725.991:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:06.465742 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:22:06.465755 kernel: audit: type=1335 audit(1707765725.991:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:22:06.465764 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:22:06.465774 systemd[1]: Listening on systemd-journald.socket. 
Feb 12 19:22:06.465783 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:22:06.465793 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:22:06.465803 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:22:06.465814 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:22:06.465825 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:22:06.465834 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:22:06.465844 systemd[1]: Mounting media.mount... Feb 12 19:22:06.465853 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:22:06.465863 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:22:06.465888 systemd[1]: Mounting tmp.mount... Feb 12 19:22:06.465901 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:22:06.465911 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:22:06.465922 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:22:06.465965 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:22:06.465980 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:22:06.465991 systemd[1]: Starting modprobe@drm.service... Feb 12 19:22:06.466000 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:22:06.466010 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:22:06.466019 systemd[1]: Starting modprobe@loop.service... Feb 12 19:22:06.466032 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:22:06.466042 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:22:06.466070 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:22:06.466081 systemd[1]: Starting systemd-journald.service... Feb 12 19:22:06.466091 kernel: loop: module loaded Feb 12 19:22:06.466100 systemd[1]: Starting systemd-modules-load.service... 
Feb 12 19:22:06.466109 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:22:06.466119 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:22:06.466130 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:22:06.466160 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:22:06.466174 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:22:06.466222 kernel: fuse: init (API version 7.34) Feb 12 19:22:06.466237 systemd[1]: Mounted media.mount. Feb 12 19:22:06.466247 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:22:06.466257 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:22:06.466267 systemd[1]: Mounted tmp.mount. Feb 12 19:22:06.466276 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:22:06.466288 kernel: audit: type=1130 audit(1707765726.441:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.466298 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:22:06.466335 systemd-journald[1202]: Journal started Feb 12 19:22:06.466404 systemd-journald[1202]: Runtime Journal (/run/log/journal/8cb052fb8f304ecd8a7ff846c2585774) is 8.0M, max 78.6M, 70.6M free. Feb 12 19:22:05.991000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:22:06.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:06.460000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:22:06.482703 kernel: audit: type=1305 audit(1707765726.460:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:22:06.482751 systemd[1]: Started systemd-journald.service. Feb 12 19:22:06.460000 audit[1202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdca4d950 a2=4000 a3=1 items=0 ppid=1 pid=1202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:06.498329 kernel: audit: type=1300 audit(1707765726.460:94): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdca4d950 a2=4000 a3=1 items=0 ppid=1 pid=1202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:06.460000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:22:06.527974 kernel: audit: type=1327 audit(1707765726.460:94): proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:22:06.528018 kernel: audit: type=1130 audit(1707765726.481:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.528753 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 12 19:22:06.529127 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:22:06.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.571434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:22:06.571710 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:22:06.573074 kernel: audit: type=1130 audit(1707765726.527:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.597204 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:22:06.597461 systemd[1]: Finished modprobe@drm.service. Feb 12 19:22:06.597694 kernel: audit: type=1130 audit(1707765726.570:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.602975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:22:06.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.613520 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 12 19:22:06.628334 kernel: audit: type=1131 audit(1707765726.570:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.630578 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:22:06.630743 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 19:22:06.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.641144 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:22:06.641355 systemd[1]: Finished modprobe@loop.service. Feb 12 19:22:06.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.653093 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:22:06.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.658874 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:22:06.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.664638 systemd[1]: Finished systemd-remount-fs.service. 
Feb 12 19:22:06.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.670895 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:22:06.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.676507 systemd[1]: Reached target network-pre.target. Feb 12 19:22:06.682605 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:22:06.688750 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:22:06.693113 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:22:06.711207 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:22:06.718366 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:22:06.722970 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:22:06.724168 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:22:06.728733 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:22:06.729957 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:22:06.735448 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:22:06.741649 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:22:06.748630 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:22:06.753894 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:22:06.760784 udevadm[1244]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:22:06.801139 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:22:06.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.812533 systemd-journald[1202]: Time spent on flushing to /var/log/journal/8cb052fb8f304ecd8a7ff846c2585774 is 13.026ms for 1064 entries. Feb 12 19:22:06.812533 systemd-journald[1202]: System Journal (/var/log/journal/8cb052fb8f304ecd8a7ff846c2585774) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:22:06.922354 systemd-journald[1202]: Received client request to flush runtime journal. Feb 12 19:22:06.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.808275 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:22:06.819862 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:22:06.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:06.923490 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:22:07.500713 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:22:07.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:07.507033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:22:07.920403 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 12 19:22:07.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:07.973071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:22:07.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:07.980806 systemd[1]: Starting systemd-udevd.service... Feb 12 19:22:07.999405 systemd-udevd[1255]: Using default interface naming scheme 'v252'. Feb 12 19:22:08.303577 systemd[1]: Started systemd-udevd.service. Feb 12 19:22:08.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:08.321976 systemd[1]: Starting systemd-networkd.service... Feb 12 19:22:08.346412 systemd[1]: Found device dev-ttyAMA0.device. Feb 12 19:22:08.404774 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 19:22:08.412462 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:22:08.450000 audit[1258]: AVC avc: denied { confidentiality } for pid=1258 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:22:08.464667 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:22:08.471173 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:22:08.471209 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:22:08.471237 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:22:08.471256 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:22:08.481645 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 12 19:22:08.482466 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:22:08.482552 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:22:08.482587 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:22:08.482604 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:22:08.483343 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:22:09.009952 systemd[1]: Started systemd-userdbd.service. Feb 12 19:22:09.019799 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:22:09.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:09.025802 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:22:08.450000 audit[1258]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae76f0530 a1=aa2c a2=ffffa27c24b0 a3=aaaae7626010 items=12 ppid=1255 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:08.450000 audit: CWD cwd="/" Feb 12 19:22:08.450000 audit: PATH item=0 name=(null) inode=5867 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=1 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=2 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=3 name=(null) inode=11321 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=4 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=5 name=(null) inode=11322 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=6 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=7 name=(null) inode=11323 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=8 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=9 name=(null) inode=11324 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=10 name=(null) inode=11320 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PATH item=11 name=(null) inode=11325 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:22:08.450000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:22:09.287440 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1273) Feb 12 19:22:09.309624 systemd-networkd[1276]: lo: Link UP Feb 12 19:22:09.309630 systemd-networkd[1276]: lo: Gained carrier Feb 12 19:22:09.310016 systemd-networkd[1276]: Enumeration completed Feb 12 19:22:09.310124 systemd[1]: Started systemd-networkd.service. Feb 12 19:22:09.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.330575 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). 
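The AVC and PATH audit records above are flat streams of `key=value` fields, some double-quoted. A minimal parsing sketch for pulling such a record into a dict (the regex and helper name are my own illustration, not a systemd or auditd API):

```python
import re

# Match key=value pairs; values are either bare tokens or
# double-quoted strings, as in comm="(udev-worker)".
AUDIT_FIELD = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse_audit_fields(record: str) -> dict:
    """Extract key=value fields from one audit record body."""
    fields = {}
    for key, raw, quoted in AUDIT_FIELD.findall(record):
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

rec = 'avc: denied { confidentiality } for pid=1258 comm="(udev-worker)" tclass=lockdown permissive=1'
f = parse_audit_fields(rec)
print(f["pid"], f["comm"], f["permissive"])  # → 1258 (udev-worker) 1
```

Non-field tokens such as `avc:` and the `{ confidentiality }` permission set simply fall through the regex, which keeps the parser tolerant of the mixed record shapes seen here.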
Feb 12 19:22:09.332104 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:22:09.337962 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:22:09.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.345010 systemd-networkd[1276]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:22:09.345674 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:22:09.403434 kernel: mlx5_core 8cac:00:02.0 enP36012s1: Link up Feb 12 19:22:09.431428 kernel: hv_netvsc 0022487c-fa39-0022-487c-fa390022487c eth0: Data path switched to VF: enP36012s1 Feb 12 19:22:09.432256 systemd-networkd[1276]: enP36012s1: Link UP Feb 12 19:22:09.432493 systemd-networkd[1276]: eth0: Link UP Feb 12 19:22:09.432553 systemd-networkd[1276]: eth0: Gained carrier Feb 12 19:22:09.437703 systemd-networkd[1276]: enP36012s1: Gained carrier Feb 12 19:22:09.447554 systemd-networkd[1276]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:22:09.664539 lvm[1334]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:09.704280 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:22:09.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.710000 systemd[1]: Reached target cryptsetup.target. Feb 12 19:22:09.716011 systemd[1]: Starting lvm2-activation.service... Feb 12 19:22:09.720466 lvm[1336]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:09.750377 systemd[1]: Finished lvm2-activation.service. 
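The DHCPv4 line above reports address 10.200.20.24/24 with gateway 10.200.20.1. As a quick illustrative sanity check (not part of the boot flow) that the offered gateway is on-link for the acquired subnet:

```python
import ipaddress

# Interface address as logged by systemd-networkd, plus the offered gateway.
iface = ipaddress.ip_interface("10.200.20.24/24")
gateway = ipaddress.ip_address("10.200.20.1")

# The gateway must sit inside the interface's network to be reachable on-link.
print(gateway in iface.network)  # → True
print(iface.network)             # → 10.200.20.0/24
```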
Feb 12 19:22:09.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:09.755591 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:22:09.760622 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:22:09.760651 systemd[1]: Reached target local-fs.target. Feb 12 19:22:09.765103 systemd[1]: Reached target machines.target. Feb 12 19:22:09.770990 systemd[1]: Starting ldconfig.service... Feb 12 19:22:09.775283 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:22:09.775363 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:22:09.776658 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:22:09.782239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:22:09.789527 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:22:09.794432 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:09.794487 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:09.795758 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:22:09.808358 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:22:09.855495 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:22:09.867165 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 12 19:22:10.137665 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1339 (bootctl)
Feb 12 19:22:10.139303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:22:10.209524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:22:10.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:10.329455 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:22:10.330115 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:22:10.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:10.425456 systemd-fsck[1348]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:22:10.425456 systemd-fsck[1348]: /dev/sda1: 236 files, 113719/258078 clusters
Feb 12 19:22:10.427100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:22:10.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:10.435931 systemd[1]: Mounting boot.mount...
Feb 12 19:22:10.450535 systemd[1]: Mounted boot.mount.
Feb 12 19:22:10.461376 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:22:10.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:10.992609 systemd-networkd[1276]: eth0: Gained IPv6LL
Feb 12 19:22:10.996353 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:22:11.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:11.881207 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:22:11.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:11.888022 systemd[1]: Starting audit-rules.service...
Feb 12 19:22:11.891768 kernel: kauditd_printk_skb: 47 callbacks suppressed
Feb 12 19:22:11.891844 kernel: audit: type=1130 audit(1707765731.885:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:11.917012 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:22:11.925869 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:22:11.933168 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:22:11.939037 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:22:11.944980 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:22:11.950458 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:22:11.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:11.956224 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:22:11.978813 kernel: audit: type=1130 audit(1707765731.954:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.001000 audit[1367]: SYSTEM_BOOT pid=1367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.022249 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:22:12.028652 kernel: audit: type=1127 audit(1707765732.001:133): pid=1367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.028727 kernel: audit: type=1130 audit(1707765732.027:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.091544 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:22:12.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.097102 systemd[1]: Reached target time-set.target.
Feb 12 19:22:12.119440 kernel: audit: type=1130 audit(1707765732.095:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.189257 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:22:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.214449 kernel: audit: type=1130 audit(1707765732.194:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.219563 systemd-resolved[1365]: Positive Trust Anchors:
Feb 12 19:22:12.219886 systemd-resolved[1365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:22:12.219966 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:22:12.281871 systemd-resolved[1365]: Using system hostname 'ci-3510.3.2-a-434dfde19b'.
Feb 12 19:22:12.283597 systemd[1]: Started systemd-resolved.service.
Feb 12 19:22:12.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.288607 systemd[1]: Reached target network.target.
Feb 12 19:22:12.312312 systemd[1]: Reached target network-online.target.
Feb 12 19:22:12.313430 kernel: audit: type=1130 audit(1707765732.287:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:12.317383 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:22:12.402000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:22:12.415835 augenrules[1384]: No rules
Feb 12 19:22:12.402000 audit[1384]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffee0850e0 a2=420 a3=0 items=0 ppid=1360 pid=1384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:22:12.416937 systemd[1]: Finished audit-rules.service.
Feb 12 19:22:12.442491 kernel: audit: type=1305 audit(1707765732.402:138): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:22:12.442567 kernel: audit: type=1300 audit(1707765732.402:138): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffee0850e0 a2=420 a3=0 items=0 ppid=1360 pid=1384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:22:12.442597 kernel: audit: type=1327 audit(1707765732.402:138): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:22:12.402000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:22:12.700720 systemd-timesyncd[1366]: Contacted time server 162.220.14.14:123 (0.flatcar.pool.ntp.org).
Feb 12 19:22:12.700792 systemd-timesyncd[1366]: Initial clock synchronization to Mon 2024-02-12 19:22:12.707944 UTC.
Feb 12 19:22:19.407076 ldconfig[1338]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:22:19.422126 systemd[1]: Finished ldconfig.service.
Feb 12 19:22:19.428585 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:22:19.468684 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:22:19.474068 systemd[1]: Reached target sysinit.target.
Feb 12 19:22:19.478835 systemd[1]: Started motdgen.path.
Feb 12 19:22:19.483138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:22:19.489593 systemd[1]: Started logrotate.timer.
Feb 12 19:22:19.494076 systemd[1]: Started mdadm.timer.
Feb 12 19:22:19.500558 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:22:19.506164 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:22:19.506199 systemd[1]: Reached target paths.target.
Feb 12 19:22:19.510841 systemd[1]: Reached target timers.target.
Feb 12 19:22:19.517500 systemd[1]: Listening on dbus.socket.
Feb 12 19:22:19.522828 systemd[1]: Starting docker.socket...
Feb 12 19:22:19.567527 systemd[1]: Listening on sshd.socket.
Feb 12 19:22:19.571741 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:22:19.572156 systemd[1]: Listening on docker.socket.
Feb 12 19:22:19.578572 systemd[1]: Reached target sockets.target.
Feb 12 19:22:19.582972 systemd[1]: Reached target basic.target.
Feb 12 19:22:19.587383 systemd[1]: System is tainted: cgroupsv1
Feb 12 19:22:19.587445 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:22:19.587466 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:22:19.588625 systemd[1]: Starting containerd.service...
Feb 12 19:22:19.593696 systemd[1]: Starting dbus.service...
Feb 12 19:22:19.598061 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:22:19.603720 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:22:19.608301 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:22:19.609468 systemd[1]: Starting motdgen.service...
Feb 12 19:22:19.614081 systemd[1]: Started nvidia.service.
Feb 12 19:22:19.619158 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:22:19.625610 systemd[1]: Starting prepare-critools.service...
Feb 12 19:22:19.631017 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:22:19.636685 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:22:19.642578 systemd[1]: Starting systemd-logind.service...
Feb 12 19:22:19.647323 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:22:19.647377 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:22:19.648449 systemd[1]: Starting update-engine.service...
Feb 12 19:22:19.653593 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:22:19.661324 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:22:19.661779 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:22:19.703693 jq[1419]: true
Feb 12 19:22:19.703956 jq[1398]: false
Feb 12 19:22:19.710103 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:22:19.710347 systemd[1]: Finished motdgen.service.
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda1
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda2
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda3
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found usr
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda4
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda6
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda7
Feb 12 19:22:19.716935 extend-filesystems[1399]: Found sda9
Feb 12 19:22:19.770470 extend-filesystems[1399]: Checking size of /dev/sda9
Feb 12 19:22:19.739031 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:22:19.739272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:22:19.750792 systemd-logind[1414]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 19:22:19.757717 systemd-logind[1414]: New seat seat0.
Feb 12 19:22:19.789122 env[1425]: time="2024-02-12T19:22:19.787861365Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:22:19.795216 jq[1437]: true
Feb 12 19:22:19.816533 env[1425]: time="2024-02-12T19:22:19.816495230Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:22:19.816820 env[1425]: time="2024-02-12T19:22:19.816800725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.818033 env[1425]: time="2024-02-12T19:22:19.817975692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:22:19.818457 env[1425]: time="2024-02-12T19:22:19.818434876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.818800 env[1425]: time="2024-02-12T19:22:19.818777143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:22:19.818880 env[1425]: time="2024-02-12T19:22:19.818865731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.818942 env[1425]: time="2024-02-12T19:22:19.818927150Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:22:19.818995 env[1425]: time="2024-02-12T19:22:19.818982327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.819142 env[1425]: time="2024-02-12T19:22:19.819123891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.819449 env[1425]: time="2024-02-12T19:22:19.819402898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:22:19.819677 env[1425]: time="2024-02-12T19:22:19.819656137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:22:19.819758 env[1425]: time="2024-02-12T19:22:19.819743125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:22:19.819868 env[1425]: time="2024-02-12T19:22:19.819850518Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:22:19.819943 env[1425]: time="2024-02-12T19:22:19.819928783Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:22:19.832313 env[1425]: time="2024-02-12T19:22:19.832271519Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:22:19.832532 env[1425]: time="2024-02-12T19:22:19.832514674Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:22:19.832616 env[1425]: time="2024-02-12T19:22:19.832601542Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:22:19.832700 env[1425]: time="2024-02-12T19:22:19.832684888Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.832826 env[1425]: time="2024-02-12T19:22:19.832811847Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.832891 env[1425]: time="2024-02-12T19:22:19.832878228Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.832951 env[1425]: time="2024-02-12T19:22:19.832936406Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.833352 env[1425]: time="2024-02-12T19:22:19.833326568Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.833475 env[1425]: time="2024-02-12T19:22:19.833457409Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.833556 env[1425]: time="2024-02-12T19:22:19.833541515Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.833626 env[1425]: time="2024-02-12T19:22:19.833612577Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.833690 env[1425]: time="2024-02-12T19:22:19.833676597Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:22:19.833868 env[1425]: time="2024-02-12T19:22:19.833851812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:22:19.834019 env[1425]: time="2024-02-12T19:22:19.834000219Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:22:19.834440 env[1425]: time="2024-02-12T19:22:19.834404385Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:22:19.834547 env[1425]: time="2024-02-12T19:22:19.834530664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.834611 env[1425]: time="2024-02-12T19:22:19.834597965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:22:19.834711 env[1425]: time="2024-02-12T19:22:19.834696356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.834780 env[1425]: time="2024-02-12T19:22:19.834760176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.834847 env[1425]: time="2024-02-12T19:22:19.834833759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.834907 env[1425]: time="2024-02-12T19:22:19.834894778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.834969 env[1425]: time="2024-02-12T19:22:19.834957077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835032 env[1425]: time="2024-02-12T19:22:19.835019577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835121 env[1425]: time="2024-02-12T19:22:19.835106044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835188 env[1425]: time="2024-02-12T19:22:19.835173865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835256 env[1425]: time="2024-02-12T19:22:19.835242287Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:22:19.835472 env[1425]: time="2024-02-12T19:22:19.835450912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835567 env[1425]: time="2024-02-12T19:22:19.835552744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835630 env[1425]: time="2024-02-12T19:22:19.835617884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.835694 env[1425]: time="2024-02-12T19:22:19.835680303Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:22:19.835758 env[1425]: time="2024-02-12T19:22:19.835742083Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:22:19.835814 env[1425]: time="2024-02-12T19:22:19.835801181Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:22:19.835884 env[1425]: time="2024-02-12T19:22:19.835869883Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:22:19.835975 env[1425]: time="2024-02-12T19:22:19.835960311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:22:19.836521 env[1425]: time="2024-02-12T19:22:19.836435819Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.836736273Z" level=info msg="Connect containerd service"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.836798133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.837849421Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.838574488Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.838621742Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.838674279Z" level=info msg="containerd successfully booted in 0.055259s"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.845974119Z" level=info msg="Start subscribing containerd event"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.846039260Z" level=info msg="Start recovering state"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.846113603Z" level=info msg="Start event monitor"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.846137810Z" level=info msg="Start snapshots syncer"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.846149774Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:22:19.864658 env[1425]: time="2024-02-12T19:22:19.846158257Z" level=info msg="Start streaming server"
Feb 12 19:22:19.864972 tar[1423]: crictl
Feb 12 19:22:19.865201 extend-filesystems[1399]: Old size kept for /dev/sda9
Feb 12 19:22:19.865201 extend-filesystems[1399]: Found sr0
Feb 12 19:22:19.883558 tar[1422]: ./
Feb 12 19:22:19.883558 tar[1422]: ./macvlan
Feb 12 19:22:19.838779 systemd[1]: Started containerd.service.
Feb 12 19:22:19.847942 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:22:19.849731 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:22:19.936935 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:22:19.937940 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:22:19.955446 tar[1422]: ./static
Feb 12 19:22:19.979203 dbus-daemon[1397]: [system] SELinux support is enabled
Feb 12 19:22:19.985786 dbus-daemon[1397]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 12 19:22:19.979388 systemd[1]: Started dbus.service.
Feb 12 19:22:19.985197 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:22:19.985220 systemd[1]: Reached target system-config.target.
Feb 12 19:22:19.993795 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:22:19.993814 systemd[1]: Reached target user-config.target.
Feb 12 19:22:20.001663 systemd[1]: Started systemd-logind.service.
Feb 12 19:22:20.016811 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 19:22:20.035942 tar[1422]: ./vlan
Feb 12 19:22:20.118370 tar[1422]: ./portmap
Feb 12 19:22:20.170288 tar[1422]: ./host-local
Feb 12 19:22:20.223856 tar[1422]: ./vrf
Feb 12 19:22:20.281096 tar[1422]: ./bridge
Feb 12 19:22:20.338265 tar[1422]: ./tuning
Feb 12 19:22:20.385046 tar[1422]: ./firewall
Feb 12 19:22:20.447465 tar[1422]: ./host-device
Feb 12 19:22:20.503746 systemd[1]: Finished prepare-critools.service.
Feb 12 19:22:20.517012 tar[1422]: ./sbr
Feb 12 19:22:20.541676 tar[1422]: ./loopback
Feb 12 19:22:20.564792 tar[1422]: ./dhcp
Feb 12 19:22:20.598064 update_engine[1416]: I0212 19:22:20.577503 1416 main.cc:92] Flatcar Update Engine starting
Feb 12 19:22:20.630541 tar[1422]: ./ptp
Feb 12 19:22:20.652184 systemd[1]: Started update-engine.service.
Feb 12 19:22:20.659101 systemd[1]: Started locksmithd.service.
Feb 12 19:22:20.663715 update_engine[1416]: I0212 19:22:20.663678 1416 update_check_scheduler.cc:74] Next update check in 10m19s
Feb 12 19:22:20.671436 tar[1422]: ./ipvlan
Feb 12 19:22:20.698945 tar[1422]: ./bandwidth
Feb 12 19:22:20.807547 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:22:21.998187 sshd_keygen[1418]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:22:22.015705 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:22:22.021915 systemd[1]: Starting issuegen.service...
Feb 12 19:22:22.029490 systemd[1]: Started waagent.service.
Feb 12 19:22:22.034262 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:22:22.035147 systemd[1]: Finished issuegen.service.
Feb 12 19:22:22.041147 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:22:22.049234 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:22:22.056486 systemd[1]: Started getty@tty1.service.
Feb 12 19:22:22.062491 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 12 19:22:22.068174 systemd[1]: Reached target getty.target.
Feb 12 19:22:22.074759 systemd[1]: Reached target multi-user.target.
Feb 12 19:22:22.081811 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:22:22.091659 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:22:22.093676 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:22:22.093908 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:22:22.100012 systemd[1]: Startup finished in 19.442s (kernel) + 26.256s (userspace) = 45.698s.
Feb 12 19:22:22.753684 login[1544]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 12 19:22:22.754028 login[1543]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:22:22.797740 systemd[1]: Created slice user-500.slice.
Feb 12 19:22:22.798743 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:22:22.800569 systemd-logind[1414]: New session 2 of user core.
Feb 12 19:22:22.851549 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:22:22.853153 systemd[1]: Starting user@500.service...
Feb 12 19:22:22.872049 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:22:23.104188 systemd[1550]: Queued start job for default target default.target.
Feb 12 19:22:23.105236 systemd[1550]: Reached target paths.target.
Feb 12 19:22:23.105360 systemd[1550]: Reached target sockets.target.
Feb 12 19:22:23.105461 systemd[1550]: Reached target timers.target.
Feb 12 19:22:23.105536 systemd[1550]: Reached target basic.target.
Feb 12 19:22:23.105653 systemd[1550]: Reached target default.target.
Feb 12 19:22:23.105740 systemd[1]: Started user@500.service.
Feb 12 19:22:23.106733 systemd[1]: Started session-2.scope.
Feb 12 19:22:23.106932 systemd[1550]: Startup finished in 227ms.
Feb 12 19:22:23.755333 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:22:23.758770 systemd-logind[1414]: New session 1 of user core.
Feb 12 19:22:23.759613 systemd[1]: Started session-1.scope.
Feb 12 19:22:29.821745 waagent[1540]: 2024-02-12T19:22:29.821637Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 12 19:22:29.829039 waagent[1540]: 2024-02-12T19:22:29.828962Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 12 19:22:29.834328 waagent[1540]: 2024-02-12T19:22:29.834259Z INFO Daemon Daemon Python: 3.9.16
Feb 12 19:22:29.841676 waagent[1540]: 2024-02-12T19:22:29.841561Z INFO Daemon Daemon Run daemon
Feb 12 19:22:29.846742 waagent[1540]: 2024-02-12T19:22:29.846666Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 12 19:22:29.868498 waagent[1540]: 2024-02-12T19:22:29.868339Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 12 19:22:29.886058 waagent[1540]: 2024-02-12T19:22:29.885916Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 12 19:22:29.897330 waagent[1540]: 2024-02-12T19:22:29.897249Z INFO Daemon Daemon cloud-init is enabled: False
Feb 12 19:22:29.902819 waagent[1540]: 2024-02-12T19:22:29.902745Z INFO Daemon Daemon Using waagent for provisioning
Feb 12 19:22:29.909124 waagent[1540]: 2024-02-12T19:22:29.909061Z INFO Daemon Daemon Activate resource disk
Feb 12 19:22:29.914706 waagent[1540]: 2024-02-12T19:22:29.914642Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 12 19:22:29.929603 waagent[1540]: 2024-02-12T19:22:29.929527Z INFO Daemon Daemon Found device: None
Feb 12 19:22:29.936079 waagent[1540]: 2024-02-12T19:22:29.936000Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 12 19:22:29.945821 waagent[1540]: 2024-02-12T19:22:29.945748Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 12 19:22:29.958510 waagent[1540]: 2024-02-12T19:22:29.958444Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 12 19:22:29.964636 waagent[1540]: 2024-02-12T19:22:29.964578Z INFO Daemon Daemon Running default provisioning handler
Feb 12 19:22:29.977590 waagent[1540]: 2024-02-12T19:22:29.977451Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 12 19:22:29.993368 waagent[1540]: 2024-02-12T19:22:29.993232Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 12 19:22:30.003588 waagent[1540]: 2024-02-12T19:22:30.003509Z INFO Daemon Daemon cloud-init is enabled: False
Feb 12 19:22:30.009143 waagent[1540]: 2024-02-12T19:22:30.009077Z INFO Daemon Daemon Copying ovf-env.xml
Feb 12 19:22:30.114546 waagent[1540]: 2024-02-12T19:22:30.113750Z INFO Daemon Daemon Successfully mounted dvd
Feb 12 19:22:30.246055 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 12 19:22:30.289393 waagent[1540]: 2024-02-12T19:22:30.289255Z INFO Daemon Daemon Detect protocol endpoint
Feb 12 19:22:30.295316 waagent[1540]: 2024-02-12T19:22:30.295219Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 12 19:22:30.302056 waagent[1540]: 2024-02-12T19:22:30.301967Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 12 19:22:30.309385 waagent[1540]: 2024-02-12T19:22:30.309296Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 12 19:22:30.315748 waagent[1540]: 2024-02-12T19:22:30.315668Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 12 19:22:30.321525 waagent[1540]: 2024-02-12T19:22:30.321444Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 12 19:22:30.486243 waagent[1540]: 2024-02-12T19:22:30.486096Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 12 19:22:30.493859 waagent[1540]: 2024-02-12T19:22:30.493812Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 12 19:22:30.499996 waagent[1540]: 2024-02-12T19:22:30.499924Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 12 19:22:32.026764 waagent[1540]: 2024-02-12T19:22:32.026599Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 12 19:22:32.042940 waagent[1540]: 2024-02-12T19:22:32.042867Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 12 19:22:32.049149 waagent[1540]: 2024-02-12T19:22:32.049080Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 12 19:22:32.127524 waagent[1540]: 2024-02-12T19:22:32.127346Z INFO Daemon Daemon Found private key matching thumbprint 792ED35A802A97E7503E3B48E49597918AABDF9C
Feb 12 19:22:32.137213 waagent[1540]: 2024-02-12T19:22:32.137123Z INFO Daemon Daemon Certificate with thumbprint DE3181B5706DC8A1C61A7D5876A5D109D1A4DA45 has no matching private key.
Feb 12 19:22:32.149796 waagent[1540]: 2024-02-12T19:22:32.149711Z INFO Daemon Daemon Fetch goal state completed
Feb 12 19:22:32.194820 waagent[1540]: 2024-02-12T19:22:32.194760Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 9c39f1b5-32c3-4c96-9708-74c56a889595 New eTag: 15123475525181720195]
Feb 12 19:22:32.206506 waagent[1540]: 2024-02-12T19:22:32.206420Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 12 19:22:32.224104 waagent[1540]: 2024-02-12T19:22:32.224038Z INFO Daemon Daemon Starting provisioning
Feb 12 19:22:32.229663 waagent[1540]: 2024-02-12T19:22:32.229588Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 12 19:22:32.234925 waagent[1540]: 2024-02-12T19:22:32.234855Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-434dfde19b]
Feb 12 19:22:32.278076 waagent[1540]: 2024-02-12T19:22:32.277939Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-434dfde19b]
Feb 12 19:22:32.286035 waagent[1540]: 2024-02-12T19:22:32.285949Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 12 19:22:32.292986 waagent[1540]: 2024-02-12T19:22:32.292915Z INFO Daemon Daemon Primary interface is [eth0]
Feb 12 19:22:32.311043 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 12 19:22:32.311265 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 12 19:22:32.311319 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 12 19:22:32.311543 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:22:32.316472 systemd-networkd[1276]: eth0: DHCPv6 lease lost
Feb 12 19:22:32.318037 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:22:32.318291 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:22:32.320185 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:22:32.352176 systemd-networkd[1596]: enP36012s1: Link UP
Feb 12 19:22:32.352187 systemd-networkd[1596]: enP36012s1: Gained carrier
Feb 12 19:22:32.353085 systemd-networkd[1596]: eth0: Link UP
Feb 12 19:22:32.353095 systemd-networkd[1596]: eth0: Gained carrier
Feb 12 19:22:32.353405 systemd-networkd[1596]: lo: Link UP
Feb 12 19:22:32.353547 systemd-networkd[1596]: lo: Gained carrier
Feb 12 19:22:32.353788 systemd-networkd[1596]: eth0: Gained IPv6LL
Feb 12 19:22:32.355105 systemd-networkd[1596]: Enumeration completed
Feb 12 19:22:32.355224 systemd[1]: Started systemd-networkd.service.
Feb 12 19:22:32.356294 waagent[1540]: 2024-02-12T19:22:32.356154Z INFO Daemon Daemon Create user account if not exists
Feb 12 19:22:32.357609 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:22:32.362996 waagent[1540]: 2024-02-12T19:22:32.362914Z INFO Daemon Daemon User core already exists, skip useradd
Feb 12 19:22:32.366298 systemd-networkd[1596]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:22:32.369385 waagent[1540]: 2024-02-12T19:22:32.369301Z INFO Daemon Daemon Configure sudoer
Feb 12 19:22:32.374659 waagent[1540]: 2024-02-12T19:22:32.374588Z INFO Daemon Daemon Configure sshd
Feb 12 19:22:32.379682 waagent[1540]: 2024-02-12T19:22:32.379613Z INFO Daemon Daemon Deploy ssh public key.
Feb 12 19:22:32.390193 systemd-networkd[1596]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 12 19:22:32.400914 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:22:33.614858 waagent[1540]: 2024-02-12T19:22:33.614794Z INFO Daemon Daemon Provisioning complete Feb 12 19:22:33.636041 waagent[1540]: 2024-02-12T19:22:33.635976Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:22:33.644092 waagent[1540]: 2024-02-12T19:22:33.644004Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:22:33.655277 waagent[1540]: 2024-02-12T19:22:33.655200Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:22:33.955728 waagent[1606]: 2024-02-12T19:22:33.955578Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:22:33.956442 waagent[1606]: 2024-02-12T19:22:33.956368Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:33.956586 waagent[1606]: 2024-02-12T19:22:33.956538Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:33.970977 waagent[1606]: 2024-02-12T19:22:33.970902Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:22:33.971159 waagent[1606]: 2024-02-12T19:22:33.971111Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:22:34.037330 waagent[1606]: 2024-02-12T19:22:34.037193Z INFO ExtHandler ExtHandler Found private key matching thumbprint 792ED35A802A97E7503E3B48E49597918AABDF9C Feb 12 19:22:34.037576 waagent[1606]: 2024-02-12T19:22:34.037519Z INFO ExtHandler ExtHandler Certificate with thumbprint DE3181B5706DC8A1C61A7D5876A5D109D1A4DA45 has no matching private key. 
Feb 12 19:22:34.037803 waagent[1606]: 2024-02-12T19:22:34.037755Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:22:34.051015 waagent[1606]: 2024-02-12T19:22:34.050961Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 5971ca4f-0bb1-4d9e-b22e-fc43529f8481 New eTag: 15123475525181720195] Feb 12 19:22:34.051628 waagent[1606]: 2024-02-12T19:22:34.051569Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:22:34.133234 waagent[1606]: 2024-02-12T19:22:34.133097Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:22:34.160631 waagent[1606]: 2024-02-12T19:22:34.160486Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1606 Feb 12 19:22:34.166080 waagent[1606]: 2024-02-12T19:22:34.165048Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:22:34.166941 waagent[1606]: 2024-02-12T19:22:34.166555Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:22:34.314493 waagent[1606]: 2024-02-12T19:22:34.314403Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:22:34.314917 waagent[1606]: 2024-02-12T19:22:34.314855Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:22:34.322634 waagent[1606]: 2024-02-12T19:22:34.322573Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:22:34.323148 waagent[1606]: 2024-02-12T19:22:34.323088Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:22:34.324324 waagent[1606]: 2024-02-12T19:22:34.324258Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:22:34.325720 waagent[1606]: 2024-02-12T19:22:34.325645Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:22:34.326358 waagent[1606]: 2024-02-12T19:22:34.326296Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:34.326656 waagent[1606]: 2024-02-12T19:22:34.326603Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:34.327292 waagent[1606]: 2024-02-12T19:22:34.327238Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:22:34.327698 waagent[1606]: 2024-02-12T19:22:34.327642Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:22:34.327698 waagent[1606]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:22:34.327698 waagent[1606]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:22:34.327698 waagent[1606]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:22:34.327698 waagent[1606]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:34.327698 waagent[1606]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:34.327698 waagent[1606]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:34.330025 waagent[1606]: 2024-02-12T19:22:34.329869Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 12 19:22:34.330879 waagent[1606]: 2024-02-12T19:22:34.330816Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:34.331159 waagent[1606]: 2024-02-12T19:22:34.331106Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:34.331843 waagent[1606]: 2024-02-12T19:22:34.331782Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:22:34.332073 waagent[1606]: 2024-02-12T19:22:34.332026Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:22:34.332269 waagent[1606]: 2024-02-12T19:22:34.332225Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:22:34.333265 waagent[1606]: 2024-02-12T19:22:34.333208Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:22:34.333353 waagent[1606]: 2024-02-12T19:22:34.333288Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:22:34.334104 waagent[1606]: 2024-02-12T19:22:34.334020Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:22:34.334181 waagent[1606]: 2024-02-12T19:22:34.334125Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:22:34.334657 waagent[1606]: 2024-02-12T19:22:34.334582Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:22:34.347263 waagent[1606]: 2024-02-12T19:22:34.347193Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:22:34.347942 waagent[1606]: 2024-02-12T19:22:34.347885Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:22:34.348926 waagent[1606]: 2024-02-12T19:22:34.348863Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:22:34.395550 waagent[1606]: 2024-02-12T19:22:34.395487Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 12 19:22:34.399909 waagent[1606]: 2024-02-12T19:22:34.399830Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1596' Feb 12 19:22:34.539819 waagent[1606]: 2024-02-12T19:22:34.539702Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:22:34.658803 waagent[1540]: 2024-02-12T19:22:34.658647Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:22:34.662778 waagent[1540]: 2024-02-12T19:22:34.662726Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:22:35.838489 waagent[1637]: 2024-02-12T19:22:35.838375Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:22:35.839544 waagent[1637]: 2024-02-12T19:22:35.839487Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:22:35.839774 waagent[1637]: 2024-02-12T19:22:35.839726Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:22:35.848265 waagent[1637]: 2024-02-12T19:22:35.848140Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:22:35.848859 waagent[1637]: 2024-02-12T19:22:35.848805Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:35.849099 waagent[1637]: 2024-02-12T19:22:35.849050Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:35.861972 waagent[1637]: 2024-02-12T19:22:35.861891Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:22:35.870928 
waagent[1637]: 2024-02-12T19:22:35.870867Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:22:35.872127 waagent[1637]: 2024-02-12T19:22:35.872070Z INFO ExtHandler Feb 12 19:22:35.872371 waagent[1637]: 2024-02-12T19:22:35.872322Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5ddfe4e9-da18-4e68-907d-4fabf31130a5 eTag: 15123475525181720195 source: Fabric] Feb 12 19:22:35.873304 waagent[1637]: 2024-02-12T19:22:35.873247Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:22:35.874722 waagent[1637]: 2024-02-12T19:22:35.874664Z INFO ExtHandler Feb 12 19:22:35.874936 waagent[1637]: 2024-02-12T19:22:35.874890Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:22:35.881673 waagent[1637]: 2024-02-12T19:22:35.881623Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:22:35.882308 waagent[1637]: 2024-02-12T19:22:35.882264Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:22:35.912069 waagent[1637]: 2024-02-12T19:22:35.912003Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 12 19:22:35.986182 waagent[1637]: 2024-02-12T19:22:35.986036Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DE3181B5706DC8A1C61A7D5876A5D109D1A4DA45', 'hasPrivateKey': False} Feb 12 19:22:35.987486 waagent[1637]: 2024-02-12T19:22:35.987399Z INFO ExtHandler Downloaded certificate {'thumbprint': '792ED35A802A97E7503E3B48E49597918AABDF9C', 'hasPrivateKey': True} Feb 12 19:22:35.988709 waagent[1637]: 2024-02-12T19:22:35.988649Z INFO ExtHandler Fetch goal state completed Feb 12 19:22:36.012650 waagent[1637]: 2024-02-12T19:22:36.012573Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1637 Feb 12 19:22:36.016352 waagent[1637]: 2024-02-12T19:22:36.016277Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:22:36.017998 waagent[1637]: 2024-02-12T19:22:36.017935Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:22:36.023463 waagent[1637]: 2024-02-12T19:22:36.023385Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:22:36.023998 waagent[1637]: 2024-02-12T19:22:36.023942Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:22:36.031944 waagent[1637]: 2024-02-12T19:22:36.031884Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:22:36.032656 waagent[1637]: 2024-02-12T19:22:36.032595Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:22:36.039126 waagent[1637]: 2024-02-12T19:22:36.039014Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Feb 12 19:22:36.042941 waagent[1637]: 2024-02-12T19:22:36.042883Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:22:36.044622 waagent[1637]: 2024-02-12T19:22:36.044551Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:22:36.044891 waagent[1637]: 2024-02-12T19:22:36.044823Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:36.045488 waagent[1637]: 2024-02-12T19:22:36.045393Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:36.046121 waagent[1637]: 2024-02-12T19:22:36.046050Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:22:36.046452 waagent[1637]: 2024-02-12T19:22:36.046374Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:22:36.046452 waagent[1637]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:22:36.046452 waagent[1637]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:22:36.046452 waagent[1637]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:22:36.046452 waagent[1637]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.046452 waagent[1637]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.046452 waagent[1637]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:22:36.048903 waagent[1637]: 2024-02-12T19:22:36.048780Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 12 19:22:36.049380 waagent[1637]: 2024-02-12T19:22:36.049307Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:22:36.051677 waagent[1637]: 2024-02-12T19:22:36.050025Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:22:36.052673 waagent[1637]: 2024-02-12T19:22:36.052559Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:22:36.052822 waagent[1637]: 2024-02-12T19:22:36.052768Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:22:36.052935 waagent[1637]: 2024-02-12T19:22:36.052889Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:22:36.053649 waagent[1637]: 2024-02-12T19:22:36.053579Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:22:36.054020 waagent[1637]: 2024-02-12T19:22:36.053950Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:22:36.056971 waagent[1637]: 2024-02-12T19:22:36.056894Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:22:36.059282 waagent[1637]: 2024-02-12T19:22:36.059082Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:22:36.061560 waagent[1637]: 2024-02-12T19:22:36.061485Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:22:36.061560 waagent[1637]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:22:36.061560 waagent[1637]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:22:36.061560 waagent[1637]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:fa:39 brd ff:ff:ff:ff:ff:ff Feb 12 19:22:36.061560 waagent[1637]: 3: enP36012s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:fa:39 brd ff:ff:ff:ff:ff:ff\ altname enP36012p0s2 Feb 12 19:22:36.061560 waagent[1637]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:22:36.061560 waagent[1637]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:22:36.061560 waagent[1637]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:22:36.061560 waagent[1637]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:22:36.061560 waagent[1637]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:22:36.061560 waagent[1637]: 2: eth0 inet6 fe80::222:48ff:fe7c:fa39/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:22:36.064151 waagent[1637]: 2024-02-12T19:22:36.063994Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:22:36.075791 waagent[1637]: 2024-02-12T19:22:36.075711Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:22:36.076822 waagent[1637]: 2024-02-12T19:22:36.076761Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:22:36.122945 waagent[1637]: 2024-02-12T19:22:36.122839Z INFO ExtHandler ExtHandler Feb 12 19:22:36.123255 
waagent[1637]: 2024-02-12T19:22:36.123190Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6b5ec24c-fcbc-42eb-9875-d349dfdcc852 correlation 7f76bed9-2fd7-4fdc-8277-b86db4706ec5 created: 2024-02-12T19:20:46.743477Z] Feb 12 19:22:36.124785 waagent[1637]: 2024-02-12T19:22:36.124723Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:22:36.126724 waagent[1637]: 2024-02-12T19:22:36.126669Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 12 19:22:36.155010 waagent[1637]: 2024-02-12T19:22:36.154932Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:22:36.177889 waagent[1637]: 2024-02-12T19:22:36.177808Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1679955E-16F9-4B5C-9B9D-E43AF9E5ECDB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:22:36.357057 waagent[1637]: 2024-02-12T19:22:36.356917Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 12 19:22:36.357057 waagent[1637]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.357057 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.357057 waagent[1637]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.357057 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.357057 waagent[1637]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.357057 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.357057 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:22:36.357057 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:22:36.357057 waagent[1637]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:22:36.364661 waagent[1637]: 2024-02-12T19:22:36.364535Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:22:36.364661 waagent[1637]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.364661 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.364661 waagent[1637]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.364661 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.364661 waagent[1637]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:22:36.364661 waagent[1637]: pkts bytes target prot opt in out source destination Feb 12 19:22:36.364661 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:22:36.364661 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:22:36.364661 waagent[1637]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:22:36.365213 waagent[1637]: 2024-02-12T19:22:36.365155Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:22:57.121082 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 12 19:23:06.432534 update_engine[1416]: I0212 19:23:06.432482 1416 update_attempter.cc:509] Updating boot flags... Feb 12 19:23:23.919477 systemd[1]: Created slice system-sshd.slice. Feb 12 19:23:23.920708 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.12.6:50384.service. Feb 12 19:23:24.632117 sshd[1725]: Accepted publickey for core from 10.200.12.6 port 50384 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:24.650595 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:24.654785 systemd[1]: Started session-3.scope. Feb 12 19:23:24.655727 systemd-logind[1414]: New session 3 of user core. Feb 12 19:23:24.984394 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.12.6:50400.service. Feb 12 19:23:25.389333 sshd[1730]: Accepted publickey for core from 10.200.12.6 port 50400 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:25.390597 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:25.394808 systemd[1]: Started session-4.scope. Feb 12 19:23:25.395001 systemd-logind[1414]: New session 4 of user core. Feb 12 19:23:25.683821 sshd[1730]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:25.687026 systemd[1]: sshd@1-10.200.20.24:22-10.200.12.6:50400.service: Deactivated successfully. Feb 12 19:23:25.688334 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:23:25.688937 systemd-logind[1414]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:23:25.689927 systemd-logind[1414]: Removed session 4. Feb 12 19:23:25.758754 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.12.6:50410.service. 
Feb 12 19:23:26.192079 sshd[1737]: Accepted publickey for core from 10.200.12.6 port 50410 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:26.193608 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:26.197126 systemd-logind[1414]: New session 5 of user core. Feb 12 19:23:26.197554 systemd[1]: Started session-5.scope. Feb 12 19:23:26.503805 sshd[1737]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:26.507256 systemd[1]: sshd@2-10.200.20.24:22-10.200.12.6:50410.service: Deactivated successfully. Feb 12 19:23:26.508760 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:23:26.509265 systemd-logind[1414]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:23:26.510045 systemd-logind[1414]: Removed session 5. Feb 12 19:23:26.584467 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.12.6:50422.service. Feb 12 19:23:27.016814 sshd[1744]: Accepted publickey for core from 10.200.12.6 port 50422 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:27.018357 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:27.022332 systemd[1]: Started session-6.scope. Feb 12 19:23:27.022828 systemd-logind[1414]: New session 6 of user core. Feb 12 19:23:27.332326 sshd[1744]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:27.335241 systemd-logind[1414]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:23:27.335387 systemd[1]: sshd@3-10.200.20.24:22-10.200.12.6:50422.service: Deactivated successfully. Feb 12 19:23:27.336149 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:23:27.336545 systemd-logind[1414]: Removed session 6. Feb 12 19:23:27.399517 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.12.6:45952.service. 
Feb 12 19:23:27.806859 sshd[1751]: Accepted publickey for core from 10.200.12.6 port 45952 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:27.808402 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:27.812028 systemd-logind[1414]: New session 7 of user core. Feb 12 19:23:27.812461 systemd[1]: Started session-7.scope. Feb 12 19:23:28.411134 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:23:28.411352 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:23:29.117901 systemd[1]: Reloading. Feb 12 19:23:29.177971 /usr/lib/systemd/system-generators/torcx-generator[1785]: time="2024-02-12T19:23:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:29.178326 /usr/lib/systemd/system-generators/torcx-generator[1785]: time="2024-02-12T19:23:29Z" level=info msg="torcx already run" Feb 12 19:23:29.252155 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:23:29.252174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:29.269112 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:29.340755 systemd[1]: Started kubelet.service. Feb 12 19:23:29.350571 systemd[1]: Starting coreos-metadata.service... 
Feb 12 19:23:29.389329 coreos-metadata[1857]: Feb 12 19:23:29.389 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:23:29.398846 coreos-metadata[1857]: Feb 12 19:23:29.398 INFO Fetch successful Feb 12 19:23:29.398846 coreos-metadata[1857]: Feb 12 19:23:29.398 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 12 19:23:29.400772 coreos-metadata[1857]: Feb 12 19:23:29.400 INFO Fetch successful Feb 12 19:23:29.400841 coreos-metadata[1857]: Feb 12 19:23:29.400 INFO Fetching http://168.63.129.16/machine/87963b68-10f3-4ec1-ae6d-c6d9e806e52c/01a99875%2De9fd%2D4508%2Daef0%2Da4f391db610c.%5Fci%2D3510.3.2%2Da%2D434dfde19b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 12 19:23:29.402042 kubelet[1850]: E0212 19:23:29.401992 1850 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:23:29.402668 coreos-metadata[1857]: Feb 12 19:23:29.402 INFO Fetch successful Feb 12 19:23:29.404386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:23:29.404545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:23:29.437125 coreos-metadata[1857]: Feb 12 19:23:29.437 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:23:29.449358 coreos-metadata[1857]: Feb 12 19:23:29.449 INFO Fetch successful Feb 12 19:23:29.458406 systemd[1]: Finished coreos-metadata.service. Feb 12 19:23:32.934639 systemd[1]: Stopped kubelet.service. Feb 12 19:23:32.949447 systemd[1]: Reloading. 
Feb 12 19:23:33.006003 /usr/lib/systemd/system-generators/torcx-generator[1917]: time="2024-02-12T19:23:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:33.006356 /usr/lib/systemd/system-generators/torcx-generator[1917]: time="2024-02-12T19:23:33Z" level=info msg="torcx already run" Feb 12 19:23:33.087883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:23:33.087901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:33.104953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:33.183151 systemd[1]: Started kubelet.service. Feb 12 19:23:33.232988 kubelet[1983]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:33.233512 kubelet[1983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:23:33.233665 kubelet[1983]: I0212 19:23:33.233633 1983 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:23:33.234972 kubelet[1983]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:33.235057 kubelet[1983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:23:34.037719 kubelet[1983]: I0212 19:23:34.037692 1983 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:23:34.037878 kubelet[1983]: I0212 19:23:34.037867 1983 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:23:34.038142 kubelet[1983]: I0212 19:23:34.038127 1983 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:23:34.043253 kubelet[1983]: I0212 19:23:34.043082 1983 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:23:34.045367 kubelet[1983]: W0212 19:23:34.045346 1983 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:23:34.045853 kubelet[1983]: I0212 19:23:34.045836 1983 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:23:34.046139 kubelet[1983]: I0212 19:23:34.046121 1983 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:23:34.046211 kubelet[1983]: I0212 19:23:34.046195 1983 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:23:34.046298 kubelet[1983]: I0212 19:23:34.046216 1983 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:23:34.046298 kubelet[1983]: I0212 19:23:34.046227 1983 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:23:34.046351 kubelet[1983]: I0212 19:23:34.046320 1983 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:23:34.049529 kubelet[1983]: I0212 19:23:34.049513 1983 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:23:34.049661 kubelet[1983]: I0212 19:23:34.049644 1983 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:23:34.049744 kubelet[1983]: I0212 19:23:34.049735 1983 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:23:34.049823 kubelet[1983]: I0212 19:23:34.049813 1983 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:23:34.049886 kubelet[1983]: E0212 19:23:34.049867 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:34.049886 kubelet[1983]: E0212 19:23:34.049823 1983 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:34.050396 kubelet[1983]: I0212 19:23:34.050374 1983 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:23:34.051527 kubelet[1983]: W0212 19:23:34.051512 1983 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:23:34.051986 kubelet[1983]: I0212 19:23:34.051970 1983 server.go:1186] "Started kubelet" Feb 12 19:23:34.061251 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:23:34.061350 kubelet[1983]: E0212 19:23:34.057334 1983 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:23:34.061350 kubelet[1983]: E0212 19:23:34.057357 1983 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:23:34.061350 kubelet[1983]: I0212 19:23:34.058147 1983 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:23:34.061350 kubelet[1983]: I0212 19:23:34.058704 1983 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:23:34.061676 kubelet[1983]: I0212 19:23:34.061659 1983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:23:34.064191 kubelet[1983]: E0212 19:23:34.064094 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b21f4790", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 51948432, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 51948432, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.065194 kubelet[1983]: E0212 19:23:34.065109 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b271aaea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 57347818, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 57347818, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.066825 kubelet[1983]: W0212 19:23:34.066672 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:34.066825 kubelet[1983]: E0212 19:23:34.066700 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:34.066825 kubelet[1983]: W0212 19:23:34.066723 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:34.066825 kubelet[1983]: E0212 19:23:34.066732 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:34.067140 kubelet[1983]: E0212 19:23:34.067008 1983 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.24\" not found" Feb 12 19:23:34.067140 kubelet[1983]: I0212 19:23:34.067037 1983 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:23:34.067140 kubelet[1983]: I0212 19:23:34.067085 1983 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:23:34.067959 kubelet[1983]: W0212 19:23:34.067817 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:34.067959 kubelet[1983]: E0212 19:23:34.067838 1983 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:34.069543 kubelet[1983]: E0212 19:23:34.069443 1983 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.20.24" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:34.105999 kubelet[1983]: I0212 19:23:34.105975 1983 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:23:34.106145 kubelet[1983]: I0212 19:23:34.106135 1983 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:23:34.106205 kubelet[1983]: I0212 19:23:34.106197 1983 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:23:34.106620 kubelet[1983]: E0212 19:23:34.106534 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), 
LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:34.107243 kubelet[1983]: E0212 19:23:34.107187 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.107814 kubelet[1983]: E0212 19:23:34.107758 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.110711 kubelet[1983]: I0212 19:23:34.110695 1983 policy_none.go:49] "None policy: Start" Feb 12 19:23:34.111464 kubelet[1983]: I0212 19:23:34.111444 1983 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:23:34.111529 kubelet[1983]: I0212 19:23:34.111473 1983 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:23:34.120427 kubelet[1983]: I0212 19:23:34.120391 1983 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:23:34.120647 kubelet[1983]: I0212 19:23:34.120627 1983 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:23:34.124072 kubelet[1983]: E0212 19:23:34.124050 1983 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.24\" not found" Feb 12 19:23:34.124279 kubelet[1983]: E0212 19:23:34.124198 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b645c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 121579791, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 121579791, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:34.168655 kubelet[1983]: I0212 19:23:34.168625 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:34.169957 kubelet[1983]: E0212 19:23:34.169888 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 168587436, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.170181 kubelet[1983]: E0212 19:23:34.170160 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:34.170681 kubelet[1983]: E0212 19:23:34.170625 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 168593193, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.171382 kubelet[1983]: E0212 19:23:34.171329 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 168595752, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.271314 kubelet[1983]: E0212 19:23:34.271288 1983 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.20.24" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:34.371154 kubelet[1983]: I0212 19:23:34.371054 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:34.372589 kubelet[1983]: E0212 19:23:34.372228 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:34.373033 kubelet[1983]: E0212 19:23:34.372961 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 371007104, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:34.373950 kubelet[1983]: E0212 19:23:34.373895 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 371013700, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:34.389736 kubelet[1983]: I0212 19:23:34.389715 1983 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:23:34.448561 kubelet[1983]: I0212 19:23:34.448533 1983 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:23:34.448561 kubelet[1983]: I0212 19:23:34.448558 1983 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:23:34.448714 kubelet[1983]: I0212 19:23:34.448588 1983 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:23:34.448714 kubelet[1983]: E0212 19:23:34.448637 1983 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:23:34.449792 kubelet[1983]: W0212 19:23:34.449768 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:34.449942 kubelet[1983]: E0212 19:23:34.449930 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:34.453849 kubelet[1983]: E0212 19:23:34.453774 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", 
ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 371016259, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:34.673384 kubelet[1983]: E0212 19:23:34.672940 1983 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.20.24" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:34.773153 kubelet[1983]: I0212 19:23:34.773129 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:34.774518 kubelet[1983]: E0212 19:23:34.774492 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:34.774707 kubelet[1983]: E0212 19:23:34.774632 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 773094953, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.854140 kubelet[1983]: E0212 19:23:34.854068 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 773099751, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:34.885667 kubelet[1983]: W0212 19:23:34.885644 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:34.885836 kubelet[1983]: E0212 19:23:34.885826 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:35.050488 kubelet[1983]: E0212 19:23:35.050458 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:35.053567 kubelet[1983]: E0212 19:23:35.053488 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 773103269, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:35.064582 kubelet[1983]: W0212 19:23:35.064563 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:35.064711 kubelet[1983]: E0212 19:23:35.064701 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:35.369797 kubelet[1983]: W0212 19:23:35.369508 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:35.369797 kubelet[1983]: E0212 19:23:35.369543 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:35.384582 kubelet[1983]: W0212 19:23:35.384560 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:35.384700 kubelet[1983]: E0212 19:23:35.384690 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:35.474617 kubelet[1983]: E0212 19:23:35.474591 1983 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.20.24" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:35.575915 kubelet[1983]: I0212 19:23:35.575886 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:35.576862 kubelet[1983]: E0212 19:23:35.576840 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:35.576954 kubelet[1983]: E0212 19:23:35.576887 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, 
time.February, 12, 19, 23, 35, 575847695, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:35.577702 kubelet[1983]: E0212 19:23:35.577645 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 35, 575852213, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:35.654500 kubelet[1983]: E0212 19:23:35.654027 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 35, 575859849, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:36.051545 kubelet[1983]: E0212 19:23:36.051517 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:37.051996 kubelet[1983]: E0212 19:23:37.051781 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:37.075719 kubelet[1983]: E0212 19:23:37.075690 1983 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.20.24" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:23:37.091727 kubelet[1983]: W0212 19:23:37.091706 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:37.091780 kubelet[1983]: E0212 19:23:37.091738 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:37.157095 kubelet[1983]: W0212 19:23:37.157067 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:37.157152 kubelet[1983]: E0212 19:23:37.157103 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:37.178244 kubelet[1983]: I0212 19:23:37.178003 1983
kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:37.179001 kubelet[1983]: E0212 19:23:37.178976 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:37.179061 kubelet[1983]: E0212 19:23:37.178971 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 177962866, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:37.179780 kubelet[1983]: E0212 19:23:37.179724 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 177975660, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:37.180512 kubelet[1983]: E0212 19:23:37.180461 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 177979018, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:37.530495 kubelet[1983]: W0212 19:23:37.530442 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:23:37.530495 kubelet[1983]: E0212 19:23:37.530476 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:23:37.986234 kubelet[1983]: W0212 19:23:37.986012 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:23:37.986234 kubelet[1983]: E0212 19:23:37.986046 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:23:38.052294 kubelet[1983]: E0212 19:23:38.052261 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:39.053401 kubelet[1983]: E0212 19:23:39.053356 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:40.053650 kubelet[1983]: E0212 19:23:40.053609 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:40.277598 kubelet[1983]: E0212 19:23:40.277546 1983 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.20.24" is
forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:40.381105 kubelet[1983]: I0212 19:23:40.380500 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24" Feb 12 19:23:40.381804 kubelet[1983]: E0212 19:23:40.381472 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54dcf35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.24 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105329461, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 40, 380460842, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54dcf35" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:40.381804 kubelet[1983]: E0212 19:23:40.381781 1983 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.24" Feb 12 19:23:40.382350 kubelet[1983]: E0212 19:23:40.382292 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54de58a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.24 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105335178, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 40, 380472636, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54de58a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:40.383136 kubelet[1983]: E0212 19:23:40.383080 1983 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.24.17b333f5b54df051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.24", UID:"10.200.20.24", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.24 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.24"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 34, 105337937, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 40, 380475515, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.24.17b333f5b54df051" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:41.054402 kubelet[1983]: E0212 19:23:41.054350 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:41.978978 kubelet[1983]: W0212 19:23:41.978949 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:41.979166 kubelet[1983]: E0212 19:23:41.979155 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:42.054646 kubelet[1983]: E0212 19:23:42.054627 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:42.508341 kubelet[1983]: W0212 19:23:42.508314 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:23:42.508544 kubelet[1983]: E0212 19:23:42.508531 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:23:42.909952 kubelet[1983]: W0212 19:23:42.909917 1983 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:42.910132 kubelet[1983]:
E0212 19:23:42.910121 1983 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:43.055403 kubelet[1983]: E0212 19:23:43.055369 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:44.041730 kubelet[1983]: I0212 19:23:44.041691 1983 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 19:23:44.056148 kubelet[1983]: E0212 19:23:44.056123 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:44.124539 kubelet[1983]: E0212 19:23:44.124500 1983 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.24\" not found"
Feb 12 19:23:44.417794 kubelet[1983]: E0212 19:23:44.417544 1983 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.24" not found
Feb 12 19:23:45.056013 kubelet[1983]: I0212 19:23:45.055969 1983 apiserver.go:52] "Watching apiserver"
Feb 12 19:23:45.057169 kubelet[1983]: E0212 19:23:45.057154 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:45.367667 kubelet[1983]: I0212 19:23:45.367445 1983 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:23:45.409637 kubelet[1983]: I0212 19:23:45.409609 1983 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:23:45.869939 kubelet[1983]: E0212 19:23:45.869903 1983 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by:
nodes "10.200.20.24" not found
Feb 12 19:23:46.058023 kubelet[1983]: E0212 19:23:46.058000 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:46.681592 kubelet[1983]: E0212 19:23:46.681563 1983 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.24\" not found" node="10.200.20.24"
Feb 12 19:23:46.783015 kubelet[1983]: I0212 19:23:46.782985 1983 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.24"
Feb 12 19:23:47.059355 kubelet[1983]: E0212 19:23:47.059326 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:47.070609 kubelet[1983]: I0212 19:23:47.070582 1983 kubelet_node_status.go:73] "Successfully registered node" node="10.200.20.24"
Feb 12 19:23:47.100191 kubelet[1983]: I0212 19:23:47.100156 1983 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:23:47.104471 kubelet[1983]: I0212 19:23:47.104445 1983 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:23:47.187533 kubelet[1983]: I0212 19:23:47.187499 1983 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 19:23:47.187857 env[1425]: time="2024-02-12T19:23:47.187814016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:23:47.188374 kubelet[1983]: I0212 19:23:47.188348 1983 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:23:47.198090 sudo[1755]: pam_unix(sudo:session): session closed for user root Feb 12 19:23:47.218012 kubelet[1983]: I0212 19:23:47.217963 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/180c4292-f4b5-4f4e-a61f-ff098b49cd49-lib-modules\") pod \"kube-proxy-gqfs5\" (UID: \"180c4292-f4b5-4f4e-a61f-ff098b49cd49\") " pod="kube-system/kube-proxy-gqfs5" Feb 12 19:23:47.218012 kubelet[1983]: I0212 19:23:47.218013 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-net\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218147 kubelet[1983]: I0212 19:23:47.218046 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-kernel\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218147 kubelet[1983]: I0212 19:23:47.218065 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-run\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218147 kubelet[1983]: I0212 19:23:47.218087 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trd8n\" (UniqueName: 
\"kubernetes.io/projected/180c4292-f4b5-4f4e-a61f-ff098b49cd49-kube-api-access-trd8n\") pod \"kube-proxy-gqfs5\" (UID: \"180c4292-f4b5-4f4e-a61f-ff098b49cd49\") " pod="kube-system/kube-proxy-gqfs5" Feb 12 19:23:47.218147 kubelet[1983]: I0212 19:23:47.218115 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-bpf-maps\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218147 kubelet[1983]: I0212 19:23:47.218135 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-hostproc\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218261 kubelet[1983]: I0212 19:23:47.218154 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cni-path\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218261 kubelet[1983]: I0212 19:23:47.218192 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-etc-cni-netd\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218261 kubelet[1983]: I0212 19:23:47.218215 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-lib-modules\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " 
pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218261 kubelet[1983]: I0212 19:23:47.218234 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ac773f8-326a-42ed-aef3-22c40d334eaf-clustermesh-secrets\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218261 kubelet[1983]: I0212 19:23:47.218254 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/180c4292-f4b5-4f4e-a61f-ff098b49cd49-xtables-lock\") pod \"kube-proxy-gqfs5\" (UID: \"180c4292-f4b5-4f4e-a61f-ff098b49cd49\") " pod="kube-system/kube-proxy-gqfs5" Feb 12 19:23:47.218369 kubelet[1983]: I0212 19:23:47.218284 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-hubble-tls\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218369 kubelet[1983]: I0212 19:23:47.218303 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-cgroup\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218369 kubelet[1983]: I0212 19:23:47.218322 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-xtables-lock\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218369 kubelet[1983]: I0212 19:23:47.218351 1983 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218484 kubelet[1983]: I0212 19:23:47.218379 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwlw\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-kube-api-access-zbwlw\") pod \"cilium-j9p9s\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") " pod="kube-system/cilium-j9p9s" Feb 12 19:23:47.218484 kubelet[1983]: I0212 19:23:47.218402 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/180c4292-f4b5-4f4e-a61f-ff098b49cd49-kube-proxy\") pod \"kube-proxy-gqfs5\" (UID: \"180c4292-f4b5-4f4e-a61f-ff098b49cd49\") " pod="kube-system/kube-proxy-gqfs5" Feb 12 19:23:47.292009 sshd[1751]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:47.294222 systemd[1]: sshd@4-10.200.20.24:22-10.200.12.6:45952.service: Deactivated successfully. Feb 12 19:23:47.295028 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:23:47.296209 systemd-logind[1414]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:23:47.297006 systemd-logind[1414]: Removed session 7. 
Feb 12 19:23:48.059669 kubelet[1983]: E0212 19:23:48.059643 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:48.267732 kubelet[1983]: I0212 19:23:48.267709 1983 request.go:690] Waited for 1.162694935s due to client-side throttling, not priority and fairness, request: GET:https://10.200.20.34:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dhubble-server-certs&limit=500&resourceVersion=0 Feb 12 19:23:48.321104 kubelet[1983]: E0212 19:23:48.321012 1983 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:23:48.321448 kubelet[1983]: E0212 19:23:48.321432 1983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path podName:4ac773f8-326a-42ed-aef3-22c40d334eaf nodeName:}" failed. No retries permitted until 2024-02-12 19:23:48.821381705 +0000 UTC m=+15.633786087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path") pod "cilium-j9p9s" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:23:48.904031 env[1425]: time="2024-02-12T19:23:48.903735079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqfs5,Uid:180c4292-f4b5-4f4e-a61f-ff098b49cd49,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:49.060367 kubelet[1983]: E0212 19:23:49.060331 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:49.207869 env[1425]: time="2024-02-12T19:23:49.207769196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9p9s,Uid:4ac773f8-326a-42ed-aef3-22c40d334eaf,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:50.061518 kubelet[1983]: E0212 19:23:50.061472 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:50.170535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073796430.mount: Deactivated successfully. 
Feb 12 19:23:50.191198 env[1425]: time="2024-02-12T19:23:50.191143840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.197046 env[1425]: time="2024-02-12T19:23:50.197001851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.200298 env[1425]: time="2024-02-12T19:23:50.200269514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.205921 env[1425]: time="2024-02-12T19:23:50.205890849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.209207 env[1425]: time="2024-02-12T19:23:50.209167429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.212014 env[1425]: time="2024-02-12T19:23:50.211986334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.215475 env[1425]: time="2024-02-12T19:23:50.215438091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.222832 env[1425]: time="2024-02-12T19:23:50.222795121Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.270236 env[1425]: time="2024-02-12T19:23:50.261772045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:50.270236 env[1425]: time="2024-02-12T19:23:50.261818428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:50.270236 env[1425]: time="2024-02-12T19:23:50.261828664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:50.270236 env[1425]: time="2024-02-12T19:23:50.265118279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6 pid=2068 runtime=io.containerd.runc.v2 Feb 12 19:23:50.271182 env[1425]: time="2024-02-12T19:23:50.271106563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:50.271182 env[1425]: time="2024-02-12T19:23:50.271163222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:50.275303 env[1425]: time="2024-02-12T19:23:50.271323045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:50.275303 env[1425]: time="2024-02-12T19:23:50.271601784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/399d813ead78ad854404c131e33c28b538dd3b18ff7ca82c0141be3e7ae58357 pid=2085 runtime=io.containerd.runc.v2 Feb 12 19:23:50.325598 env[1425]: time="2024-02-12T19:23:50.325483820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqfs5,Uid:180c4292-f4b5-4f4e-a61f-ff098b49cd49,Namespace:kube-system,Attempt:0,} returns sandbox id \"399d813ead78ad854404c131e33c28b538dd3b18ff7ca82c0141be3e7ae58357\"" Feb 12 19:23:50.326436 env[1425]: time="2024-02-12T19:23:50.326385815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9p9s,Uid:4ac773f8-326a-42ed-aef3-22c40d334eaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\"" Feb 12 19:23:50.328610 env[1425]: time="2024-02-12T19:23:50.328578185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:23:51.061798 kubelet[1983]: E0212 19:23:51.061763 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:51.364594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816654894.mount: Deactivated successfully. 
Feb 12 19:23:51.733151 env[1425]: time="2024-02-12T19:23:51.733048539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:51.737609 env[1425]: time="2024-02-12T19:23:51.737555713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:51.741126 env[1425]: time="2024-02-12T19:23:51.741094468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:51.743855 env[1425]: time="2024-02-12T19:23:51.743827506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:51.744235 env[1425]: time="2024-02-12T19:23:51.744206853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:23:51.744904 kubelet[1983]: E0212 19:23:51.744828 1983 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.13,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf 
--hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-trd8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-gqfs5_kube-system(180c4292-f4b5-4f4e-a61f-ff098b49cd49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 12 19:23:51.745051 kubelet[1983]: E0212 19:23:51.744870 1983 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-gqfs5" podUID=180c4292-f4b5-4f4e-a61f-ff098b49cd49 Feb 12 19:23:51.745364 
env[1425]: time="2024-02-12T19:23:51.745323940Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:23:52.061992 kubelet[1983]: E0212 19:23:52.061952 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:52.472854 kubelet[1983]: E0212 19:23:52.472722 1983 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.13,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-trd8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-gqfs5_kube-system(180c4292-f4b5-4f4e-a61f-ff098b49cd49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 12 19:23:52.473127 kubelet[1983]: E0212 19:23:52.472758 1983 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-gqfs5" podUID=180c4292-f4b5-4f4e-a61f-ff098b49cd49 Feb 12 19:23:53.062707 kubelet[1983]: E0212 19:23:53.062673 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:54.050444 kubelet[1983]: E0212 19:23:54.050395 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:54.063764 kubelet[1983]: E0212 19:23:54.063727 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:55.064745 kubelet[1983]: E0212 19:23:55.064705 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:56.065393 kubelet[1983]: E0212 19:23:56.065347 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:56.436593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187152302.mount: Deactivated successfully. 
Feb 12 19:23:57.066186 kubelet[1983]: E0212 19:23:57.065990 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:58.066942 kubelet[1983]: E0212 19:23:58.066901 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:58.797497 env[1425]: time="2024-02-12T19:23:58.797458611Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.804247 env[1425]: time="2024-02-12T19:23:58.804215420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.809644 env[1425]: time="2024-02-12T19:23:58.809619755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:58.810389 env[1425]: time="2024-02-12T19:23:58.810362532Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:23:58.812359 env[1425]: time="2024-02-12T19:23:58.812333300Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:23:58.830046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563401505.mount: Deactivated successfully. 
Feb 12 19:23:58.834597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929526326.mount: Deactivated successfully. Feb 12 19:23:58.852940 env[1425]: time="2024-02-12T19:23:58.852888550Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\"" Feb 12 19:23:58.853571 env[1425]: time="2024-02-12T19:23:58.853546033Z" level=info msg="StartContainer for \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\"" Feb 12 19:23:58.898818 env[1425]: time="2024-02-12T19:23:58.898780517Z" level=info msg="StartContainer for \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\" returns successfully" Feb 12 19:23:59.068098 kubelet[1983]: E0212 19:23:59.068000 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:59.827949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01-rootfs.mount: Deactivated successfully. 
Feb 12 19:24:00.068712 kubelet[1983]: E0212 19:24:00.068684 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:00.509709 env[1425]: time="2024-02-12T19:24:00.509653872Z" level=info msg="shim disconnected" id=b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01 Feb 12 19:24:00.510122 env[1425]: time="2024-02-12T19:24:00.510101983Z" level=warning msg="cleaning up after shim disconnected" id=b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01 namespace=k8s.io Feb 12 19:24:00.510199 env[1425]: time="2024-02-12T19:24:00.510185639Z" level=info msg="cleaning up dead shim" Feb 12 19:24:00.516747 env[1425]: time="2024-02-12T19:24:00.516705522Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2194 runtime=io.containerd.runc.v2\n" Feb 12 19:24:01.069694 kubelet[1983]: E0212 19:24:01.069668 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:01.488129 env[1425]: time="2024-02-12T19:24:01.487764684Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:24:01.508184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876606207.mount: Deactivated successfully. Feb 12 19:24:01.513128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849956928.mount: Deactivated successfully. 
Feb 12 19:24:01.528676 env[1425]: time="2024-02-12T19:24:01.528622688Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\"" Feb 12 19:24:01.529342 env[1425]: time="2024-02-12T19:24:01.529316733Z" level=info msg="StartContainer for \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\"" Feb 12 19:24:01.574843 env[1425]: time="2024-02-12T19:24:01.574788756Z" level=info msg="StartContainer for \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\" returns successfully" Feb 12 19:24:01.581919 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:24:01.582157 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:24:01.582315 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:24:01.585805 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:24:01.596987 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:24:01.623627 env[1425]: time="2024-02-12T19:24:01.623575926Z" level=info msg="shim disconnected" id=b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417 Feb 12 19:24:01.623627 env[1425]: time="2024-02-12T19:24:01.623625592Z" level=warning msg="cleaning up after shim disconnected" id=b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417 namespace=k8s.io Feb 12 19:24:01.623627 env[1425]: time="2024-02-12T19:24:01.623634949Z" level=info msg="cleaning up dead shim" Feb 12 19:24:01.630993 env[1425]: time="2024-02-12T19:24:01.630938451Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2259 runtime=io.containerd.runc.v2\n" Feb 12 19:24:02.071159 kubelet[1983]: E0212 19:24:02.071126 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:02.490961 env[1425]: time="2024-02-12T19:24:02.490584297Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:24:02.506181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417-rootfs.mount: Deactivated successfully. Feb 12 19:24:02.523385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990497251.mount: Deactivated successfully. 
Feb 12 19:24:02.543326 env[1425]: time="2024-02-12T19:24:02.543273755Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\"" Feb 12 19:24:02.544279 env[1425]: time="2024-02-12T19:24:02.544256204Z" level=info msg="StartContainer for \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\"" Feb 12 19:24:02.597229 env[1425]: time="2024-02-12T19:24:02.597182517Z" level=info msg="StartContainer for \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\" returns successfully" Feb 12 19:24:02.625523 env[1425]: time="2024-02-12T19:24:02.625472309Z" level=info msg="shim disconnected" id=abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103 Feb 12 19:24:02.625745 env[1425]: time="2024-02-12T19:24:02.625725279Z" level=warning msg="cleaning up after shim disconnected" id=abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103 namespace=k8s.io Feb 12 19:24:02.625818 env[1425]: time="2024-02-12T19:24:02.625805497Z" level=info msg="cleaning up dead shim" Feb 12 19:24:02.632581 env[1425]: time="2024-02-12T19:24:02.632544397Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2318 runtime=io.containerd.runc.v2\n" Feb 12 19:24:03.072150 kubelet[1983]: E0212 19:24:03.072116 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:03.451397 env[1425]: time="2024-02-12T19:24:03.451103300Z" level=info msg="CreateContainer within sandbox \"399d813ead78ad854404c131e33c28b538dd3b18ff7ca82c0141be3e7ae58357\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:24:03.484287 env[1425]: time="2024-02-12T19:24:03.484233344Z" level=info msg="CreateContainer within sandbox 
\"399d813ead78ad854404c131e33c28b538dd3b18ff7ca82c0141be3e7ae58357\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92019f13192ac70c1bae17f404803d154d275498fe767b0a0e4075b4ad4cc69e\"" Feb 12 19:24:03.485155 env[1425]: time="2024-02-12T19:24:03.485102829Z" level=info msg="StartContainer for \"92019f13192ac70c1bae17f404803d154d275498fe767b0a0e4075b4ad4cc69e\"" Feb 12 19:24:03.496054 env[1425]: time="2024-02-12T19:24:03.496005561Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:24:03.506898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103-rootfs.mount: Deactivated successfully. Feb 12 19:24:03.513185 systemd[1]: run-containerd-runc-k8s.io-92019f13192ac70c1bae17f404803d154d275498fe767b0a0e4075b4ad4cc69e-runc.2G50M6.mount: Deactivated successfully. 
Feb 12 19:24:03.545162 env[1425]: time="2024-02-12T19:24:03.545117485Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\"" Feb 12 19:24:03.546391 env[1425]: time="2024-02-12T19:24:03.546341874Z" level=info msg="StartContainer for \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\"" Feb 12 19:24:03.556152 env[1425]: time="2024-02-12T19:24:03.556114193Z" level=info msg="StartContainer for \"92019f13192ac70c1bae17f404803d154d275498fe767b0a0e4075b4ad4cc69e\" returns successfully" Feb 12 19:24:03.604798 env[1425]: time="2024-02-12T19:24:03.604758883Z" level=info msg="StartContainer for \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\" returns successfully" Feb 12 19:24:03.634613 env[1425]: time="2024-02-12T19:24:03.634559147Z" level=info msg="shim disconnected" id=ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d Feb 12 19:24:03.634613 env[1425]: time="2024-02-12T19:24:03.634609293Z" level=warning msg="cleaning up after shim disconnected" id=ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d namespace=k8s.io Feb 12 19:24:03.634840 env[1425]: time="2024-02-12T19:24:03.634619011Z" level=info msg="cleaning up dead shim" Feb 12 19:24:03.641797 env[1425]: time="2024-02-12T19:24:03.641746044Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2437 runtime=io.containerd.runc.v2\n" Feb 12 19:24:04.072764 kubelet[1983]: E0212 19:24:04.072734 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:04.500428 env[1425]: time="2024-02-12T19:24:04.500309692Z" level=info msg="CreateContainer within sandbox 
\"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:24:04.521321 kubelet[1983]: I0212 19:24:04.521294 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gqfs5" podStartSLOduration=-9.223372019333534e+09 pod.CreationTimestamp="2024-02-12 19:23:47 +0000 UTC" firstStartedPulling="2024-02-12 19:23:50.327964926 +0000 UTC m=+17.140369308" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:04.50662458 +0000 UTC m=+31.319029002" watchObservedRunningTime="2024-02-12 19:24:04.521241269 +0000 UTC m=+31.333645651" Feb 12 19:24:04.529049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698689074.mount: Deactivated successfully. Feb 12 19:24:04.536070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237834110.mount: Deactivated successfully. Feb 12 19:24:04.553001 env[1425]: time="2024-02-12T19:24:04.552954030Z" level=info msg="CreateContainer within sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\"" Feb 12 19:24:04.554052 env[1425]: time="2024-02-12T19:24:04.554030225Z" level=info msg="StartContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\"" Feb 12 19:24:04.599766 env[1425]: time="2024-02-12T19:24:04.599721245Z" level=info msg="StartContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" returns successfully" Feb 12 19:24:04.677447 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:24:04.677964 kubelet[1983]: I0212 19:24:04.677941 1983 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:24:05.073100 kubelet[1983]: E0212 19:24:05.073060 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:05.149444 kernel: Initializing XFRM netlink socket
Feb 12 19:24:05.157480 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 12 19:24:05.519777 kubelet[1983]: I0212 19:24:05.519675 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j9p9s" podStartSLOduration=-9.22337201833514e+09 pod.CreationTimestamp="2024-02-12 19:23:47 +0000 UTC" firstStartedPulling="2024-02-12 19:23:50.328168373 +0000 UTC m=+17.140572755" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:05.519537444 +0000 UTC m=+32.331941826" watchObservedRunningTime="2024-02-12 19:24:05.519634899 +0000 UTC m=+32.332039241"
Feb 12 19:24:06.073440 kubelet[1983]: E0212 19:24:06.073398 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:06.792925 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 19:24:06.792467 systemd-networkd[1596]: cilium_host: Link UP
Feb 12 19:24:06.792574 systemd-networkd[1596]: cilium_net: Link UP
Feb 12 19:24:06.792577 systemd-networkd[1596]: cilium_net: Gained carrier
Feb 12 19:24:06.792725 systemd-networkd[1596]: cilium_host: Gained carrier
Feb 12 19:24:06.795778 systemd-networkd[1596]: cilium_host: Gained IPv6LL
Feb 12 19:24:06.944973 systemd-networkd[1596]: cilium_vxlan: Link UP
Feb 12 19:24:06.944983 systemd-networkd[1596]: cilium_vxlan: Gained carrier
Feb 12 19:24:07.075240 kubelet[1983]: E0212 19:24:07.074766 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:07.183438 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:24:07.232545 systemd-networkd[1596]: cilium_net: Gained IPv6LL
Feb 12 19:24:07.376692 kubelet[1983]: I0212 19:24:07.376584 1983 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:24:07.429505 kubelet[1983]: I0212 19:24:07.429467 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lw52\" (UniqueName: \"kubernetes.io/projected/015760a9-6a9b-48da-aee8-c2c2d368fa6d-kube-api-access-5lw52\") pod \"nginx-deployment-8ffc5cf85-fq5ct\" (UID: \"015760a9-6a9b-48da-aee8-c2c2d368fa6d\") " pod="default/nginx-deployment-8ffc5cf85-fq5ct"
Feb 12 19:24:07.681034 env[1425]: time="2024-02-12T19:24:07.680924357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fq5ct,Uid:015760a9-6a9b-48da-aee8-c2c2d368fa6d,Namespace:default,Attempt:0,}"
Feb 12 19:24:07.866895 systemd-networkd[1596]: lxc_health: Link UP
Feb 12 19:24:07.887440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:24:07.887459 systemd-networkd[1596]: lxc_health: Gained carrier
Feb 12 19:24:08.075697 kubelet[1983]: E0212 19:24:08.075650 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:08.236437 systemd-networkd[1596]: lxc83f887930234: Link UP
Feb 12 19:24:08.242508 kernel: eth0: renamed from tmp8dce4
Feb 12 19:24:08.251504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc83f887930234: link becomes ready
Feb 12 19:24:08.251544 systemd-networkd[1596]: lxc83f887930234: Gained carrier
Feb 12 19:24:08.560624 systemd-networkd[1596]: cilium_vxlan: Gained IPv6LL
Feb 12 19:24:09.076126 kubelet[1983]: E0212 19:24:09.076089 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:09.200594 systemd-networkd[1596]: lxc_health: Gained IPv6LL
Feb 12 19:24:10.032592 systemd-networkd[1596]: lxc83f887930234: Gained IPv6LL
Feb 12 19:24:10.076598 kubelet[1983]: E0212 19:24:10.076561 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:11.077357 kubelet[1983]: E0212 19:24:11.077323 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:11.812458 env[1425]: time="2024-02-12T19:24:11.812377359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:24:11.812835 env[1425]: time="2024-02-12T19:24:11.812441984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:24:11.812835 env[1425]: time="2024-02-12T19:24:11.812456301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:24:11.813051 env[1425]: time="2024-02-12T19:24:11.813009493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dce46438f2e88738da2fa95225a576f868833734a0833aa3f9a999c5531eaec pid=3071 runtime=io.containerd.runc.v2
Feb 12 19:24:11.867080 env[1425]: time="2024-02-12T19:24:11.867034299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fq5ct,Uid:015760a9-6a9b-48da-aee8-c2c2d368fa6d,Namespace:default,Attempt:0,} returns sandbox id \"8dce46438f2e88738da2fa95225a576f868833734a0833aa3f9a999c5531eaec\""
Feb 12 19:24:11.868815 env[1425]: time="2024-02-12T19:24:11.868788454Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:24:12.078124 kubelet[1983]: E0212 19:24:12.078006 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:13.078960 kubelet[1983]: E0212 19:24:13.078921 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:14.050287 kubelet[1983]: E0212 19:24:14.050250 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:14.079643 kubelet[1983]: E0212 19:24:14.079586 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:14.331480 kubelet[1983]: I0212 19:24:14.331155 1983 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 19:24:14.476197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300273538.mount: Deactivated successfully.
Feb 12 19:24:15.080033 kubelet[1983]: E0212 19:24:15.079992 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:15.333805 env[1425]: time="2024-02-12T19:24:15.333692223Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:15.340796 env[1425]: time="2024-02-12T19:24:15.340758706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:15.344560 env[1425]: time="2024-02-12T19:24:15.344528897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:15.348561 env[1425]: time="2024-02-12T19:24:15.348527878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:15.349225 env[1425]: time="2024-02-12T19:24:15.349193615Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 19:24:15.351788 env[1425]: time="2024-02-12T19:24:15.351752506Z" level=info msg="CreateContainer within sandbox \"8dce46438f2e88738da2fa95225a576f868833734a0833aa3f9a999c5531eaec\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 19:24:15.374241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116203507.mount: Deactivated successfully.
Feb 12 19:24:15.379368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760589763.mount: Deactivated successfully.
Feb 12 19:24:15.390522 env[1425]: time="2024-02-12T19:24:15.390480953Z" level=info msg="CreateContainer within sandbox \"8dce46438f2e88738da2fa95225a576f868833734a0833aa3f9a999c5531eaec\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"94e6d6dc11d6f0edcbbda6f24a5feae9b26974ad49f71f8caeb464af83879610\""
Feb 12 19:24:15.391196 env[1425]: time="2024-02-12T19:24:15.391113937Z" level=info msg="StartContainer for \"94e6d6dc11d6f0edcbbda6f24a5feae9b26974ad49f71f8caeb464af83879610\""
Feb 12 19:24:15.440775 env[1425]: time="2024-02-12T19:24:15.440732767Z" level=info msg="StartContainer for \"94e6d6dc11d6f0edcbbda6f24a5feae9b26974ad49f71f8caeb464af83879610\" returns successfully"
Feb 12 19:24:15.525717 kubelet[1983]: I0212 19:24:15.525676 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-fq5ct" podStartSLOduration=-9.223372028329136e+09 pod.CreationTimestamp="2024-02-12 19:24:07 +0000 UTC" firstStartedPulling="2024-02-12 19:24:11.868251418 +0000 UTC m=+38.680655760" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:15.524852631 +0000 UTC m=+42.337257013" watchObservedRunningTime="2024-02-12 19:24:15.525640262 +0000 UTC m=+42.338044644"
Feb 12 19:24:16.080890 kubelet[1983]: E0212 19:24:16.080844 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:17.081470 kubelet[1983]: E0212 19:24:17.081425 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:18.082323 kubelet[1983]: E0212 19:24:18.082293 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:19.083241 kubelet[1983]: E0212 19:24:19.083199 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:20.084188 kubelet[1983]: E0212 19:24:20.084152 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:21.085534 kubelet[1983]: E0212 19:24:21.085496 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:22.085719 kubelet[1983]: E0212 19:24:22.085689 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:22.402975 kubelet[1983]: I0212 19:24:22.402877 1983 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:24:22.502983 kubelet[1983]: I0212 19:24:22.502945 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e7c01d3f-1aca-44e1-966c-322e4c45b166-data\") pod \"nfs-server-provisioner-0\" (UID: \"e7c01d3f-1aca-44e1-966c-322e4c45b166\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:24:22.503148 kubelet[1983]: I0212 19:24:22.503079 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr986\" (UniqueName: \"kubernetes.io/projected/e7c01d3f-1aca-44e1-966c-322e4c45b166-kube-api-access-dr986\") pod \"nfs-server-provisioner-0\" (UID: \"e7c01d3f-1aca-44e1-966c-322e4c45b166\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:24:22.706116 env[1425]: time="2024-02-12T19:24:22.705680155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e7c01d3f-1aca-44e1-966c-322e4c45b166,Namespace:default,Attempt:0,}"
Feb 12 19:24:22.769479 systemd-networkd[1596]: lxc6a95151682d4: Link UP
Feb 12 19:24:22.779433 kernel: eth0: renamed from tmpb0b58
Feb 12 19:24:22.796215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:24:22.796330 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6a95151682d4: link becomes ready
Feb 12 19:24:22.796491 systemd-networkd[1596]: lxc6a95151682d4: Gained carrier
Feb 12 19:24:23.025995 env[1425]: time="2024-02-12T19:24:23.025931350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:24:23.025995 env[1425]: time="2024-02-12T19:24:23.025970942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:24:23.026494 env[1425]: time="2024-02-12T19:24:23.025981620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:24:23.026494 env[1425]: time="2024-02-12T19:24:23.026220655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0b58a6d057f5b33e1cb6d8c2c69fcecc51f85f4622e92d7914a4ac3703b7a07 pid=3240 runtime=io.containerd.runc.v2
Feb 12 19:24:23.068001 env[1425]: time="2024-02-12T19:24:23.067952661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e7c01d3f-1aca-44e1-966c-322e4c45b166,Namespace:default,Attempt:0,} returns sandbox id \"b0b58a6d057f5b33e1cb6d8c2c69fcecc51f85f4622e92d7914a4ac3703b7a07\""
Feb 12 19:24:23.069300 env[1425]: time="2024-02-12T19:24:23.069273653Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 19:24:23.087067 kubelet[1983]: E0212 19:24:23.087033 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:23.615469 systemd[1]: run-containerd-runc-k8s.io-b0b58a6d057f5b33e1cb6d8c2c69fcecc51f85f4622e92d7914a4ac3703b7a07-runc.hb2r0g.mount: Deactivated successfully.
Feb 12 19:24:24.087504 kubelet[1983]: E0212 19:24:24.087462 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:24.496590 systemd-networkd[1596]: lxc6a95151682d4: Gained IPv6LL
Feb 12 19:24:25.088043 kubelet[1983]: E0212 19:24:25.087992 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:25.246669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966920694.mount: Deactivated successfully.
Feb 12 19:24:26.088148 kubelet[1983]: E0212 19:24:26.088096 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:27.088622 kubelet[1983]: E0212 19:24:27.088576 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:27.123465 env[1425]: time="2024-02-12T19:24:27.123392243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:27.131035 env[1425]: time="2024-02-12T19:24:27.130998620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:27.134489 env[1425]: time="2024-02-12T19:24:27.134458729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:27.138423 env[1425]: time="2024-02-12T19:24:27.138372518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:27.139133 env[1425]: time="2024-02-12T19:24:27.139104748Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 12 19:24:27.141772 env[1425]: time="2024-02-12T19:24:27.141740603Z" level=info msg="CreateContainer within sandbox \"b0b58a6d057f5b33e1cb6d8c2c69fcecc51f85f4622e92d7914a4ac3703b7a07\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 19:24:27.162924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385289599.mount: Deactivated successfully.
Feb 12 19:24:27.168339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958051522.mount: Deactivated successfully.
Feb 12 19:24:27.187584 env[1425]: time="2024-02-12T19:24:27.187531956Z" level=info msg="CreateContainer within sandbox \"b0b58a6d057f5b33e1cb6d8c2c69fcecc51f85f4622e92d7914a4ac3703b7a07\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f3be279c33bf5e7b6c739b3e70a87238e52a82aba554ec16652a4a9aa78f3895\""
Feb 12 19:24:27.188236 env[1425]: time="2024-02-12T19:24:27.188197478Z" level=info msg="StartContainer for \"f3be279c33bf5e7b6c739b3e70a87238e52a82aba554ec16652a4a9aa78f3895\""
Feb 12 19:24:27.243944 env[1425]: time="2024-02-12T19:24:27.241611085Z" level=info msg="StartContainer for \"f3be279c33bf5e7b6c739b3e70a87238e52a82aba554ec16652a4a9aa78f3895\" returns successfully"
Feb 12 19:24:27.550153 kubelet[1983]: I0212 19:24:27.550112 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031304699e+09 pod.CreationTimestamp="2024-02-12 19:24:22 +0000 UTC" firstStartedPulling="2024-02-12 19:24:23.06907993 +0000 UTC m=+49.881484312" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:27.549332417 +0000 UTC m=+54.361736799" watchObservedRunningTime="2024-02-12 19:24:27.550077606 +0000 UTC m=+54.362481988"
Feb 12 19:24:28.088757 kubelet[1983]: E0212 19:24:28.088707 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:29.089176 kubelet[1983]: E0212 19:24:29.089141 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:30.089941 kubelet[1983]: E0212 19:24:30.089901 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:31.091062 kubelet[1983]: E0212 19:24:31.091029 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:32.091752 kubelet[1983]: E0212 19:24:32.091721 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:33.092646 kubelet[1983]: E0212 19:24:33.092609 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:34.050393 kubelet[1983]: E0212 19:24:34.050350 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:34.093588 kubelet[1983]: E0212 19:24:34.093557 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:35.094428 kubelet[1983]: E0212 19:24:35.094382 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:36.095318 kubelet[1983]: E0212 19:24:36.095285 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:37.095435 kubelet[1983]: E0212 19:24:37.095373 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:37.161836 kubelet[1983]: I0212 19:24:37.161688 1983 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:24:37.270123 kubelet[1983]: I0212 19:24:37.270092 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghjvg\" (UniqueName: \"kubernetes.io/projected/7fe8fbcc-8d71-4979-9153-4825be6095d0-kube-api-access-ghjvg\") pod \"test-pod-1\" (UID: \"7fe8fbcc-8d71-4979-9153-4825be6095d0\") " pod="default/test-pod-1"
Feb 12 19:24:37.270333 kubelet[1983]: I0212 19:24:37.270322 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd36e747-f7e7-46c7-a133-6ac037f498d4\" (UniqueName: \"kubernetes.io/nfs/7fe8fbcc-8d71-4979-9153-4825be6095d0-pvc-dd36e747-f7e7-46c7-a133-6ac037f498d4\") pod \"test-pod-1\" (UID: \"7fe8fbcc-8d71-4979-9153-4825be6095d0\") " pod="default/test-pod-1"
Feb 12 19:24:37.603441 kernel: FS-Cache: Loaded
Feb 12 19:24:37.688698 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 19:24:37.688824 kernel: RPC: Registered udp transport module.
Feb 12 19:24:37.688855 kernel: RPC: Registered tcp transport module.
Feb 12 19:24:37.697279 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 19:24:37.826447 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 19:24:38.056202 kernel: NFS: Registering the id_resolver key type
Feb 12 19:24:38.056326 kernel: Key type id_resolver registered
Feb 12 19:24:38.059375 kernel: Key type id_legacy registered
Feb 12 19:24:38.095913 kubelet[1983]: E0212 19:24:38.095872 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:38.644755 nfsidmap[3382]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-434dfde19b'
Feb 12 19:24:38.710353 nfsidmap[3383]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-434dfde19b'
Feb 12 19:24:38.965566 env[1425]: time="2024-02-12T19:24:38.965174627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7fe8fbcc-8d71-4979-9153-4825be6095d0,Namespace:default,Attempt:0,}"
Feb 12 19:24:39.014884 systemd-networkd[1596]: lxcbc6898b8fd5c: Link UP
Feb 12 19:24:39.026505 kernel: eth0: renamed from tmpb7b25
Feb 12 19:24:39.044005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:24:39.044121 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbc6898b8fd5c: link becomes ready
Feb 12 19:24:39.044485 systemd-networkd[1596]: lxcbc6898b8fd5c: Gained carrier
Feb 12 19:24:39.097332 kubelet[1983]: E0212 19:24:39.097286 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:39.268935 env[1425]: time="2024-02-12T19:24:39.268584300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:24:39.268935 env[1425]: time="2024-02-12T19:24:39.268677846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:24:39.268935 env[1425]: time="2024-02-12T19:24:39.268707601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:24:39.269256 env[1425]: time="2024-02-12T19:24:39.269150215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b2555b98c5220cd51ab73e062f85a91f91138683a0619ed143de0238d06002 pid=3410 runtime=io.containerd.runc.v2
Feb 12 19:24:39.309192 env[1425]: time="2024-02-12T19:24:39.309152989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7fe8fbcc-8d71-4979-9153-4825be6095d0,Namespace:default,Attempt:0,} returns sandbox id \"b7b2555b98c5220cd51ab73e062f85a91f91138683a0619ed143de0238d06002\""
Feb 12 19:24:39.310936 env[1425]: time="2024-02-12T19:24:39.310901206Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:24:39.808512 env[1425]: time="2024-02-12T19:24:39.808471658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:39.814904 env[1425]: time="2024-02-12T19:24:39.814872374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:39.817977 env[1425]: time="2024-02-12T19:24:39.817950830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:39.821553 env[1425]: time="2024-02-12T19:24:39.821520492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:24:39.822266 env[1425]: time="2024-02-12T19:24:39.822237624Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 19:24:39.824849 env[1425]: time="2024-02-12T19:24:39.824821715Z" level=info msg="CreateContainer within sandbox \"b7b2555b98c5220cd51ab73e062f85a91f91138683a0619ed143de0238d06002\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 19:24:39.848525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568778765.mount: Deactivated successfully.
Feb 12 19:24:39.853140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839645967.mount: Deactivated successfully.
Feb 12 19:24:39.870052 env[1425]: time="2024-02-12T19:24:39.870001230Z" level=info msg="CreateContainer within sandbox \"b7b2555b98c5220cd51ab73e062f85a91f91138683a0619ed143de0238d06002\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7997f92388f14e663b6bfb2c68cf143612ceb27656226e15301073612378eaed\""
Feb 12 19:24:39.870836 env[1425]: time="2024-02-12T19:24:39.870807548Z" level=info msg="StartContainer for \"7997f92388f14e663b6bfb2c68cf143612ceb27656226e15301073612378eaed\""
Feb 12 19:24:39.914186 env[1425]: time="2024-02-12T19:24:39.914147500Z" level=info msg="StartContainer for \"7997f92388f14e663b6bfb2c68cf143612ceb27656226e15301073612378eaed\" returns successfully"
Feb 12 19:24:40.098176 kubelet[1983]: E0212 19:24:40.097807 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:40.569241 kubelet[1983]: I0212 19:24:40.569192 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372019285616e+09 pod.CreationTimestamp="2024-02-12 19:24:23 +0000 UTC" firstStartedPulling="2024-02-12 19:24:39.310404481 +0000 UTC m=+66.122808863" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:40.568397692 +0000 UTC m=+67.380802074" watchObservedRunningTime="2024-02-12 19:24:40.569159259 +0000 UTC m=+67.381563641"
Feb 12 19:24:40.816594 systemd-networkd[1596]: lxcbc6898b8fd5c: Gained IPv6LL
Feb 12 19:24:41.098405 kubelet[1983]: E0212 19:24:41.098361 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:42.098723 kubelet[1983]: E0212 19:24:42.098690 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:43.099486 kubelet[1983]: E0212 19:24:43.099441 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:44.100026 kubelet[1983]: E0212 19:24:44.099992 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:44.214999 systemd[1]: run-containerd-runc-k8s.io-63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d-runc.2fUlpC.mount: Deactivated successfully.
Feb 12 19:24:44.227218 env[1425]: time="2024-02-12T19:24:44.227155097Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:24:44.231439 env[1425]: time="2024-02-12T19:24:44.231383494Z" level=info msg="StopContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" with timeout 1 (s)"
Feb 12 19:24:44.231858 env[1425]: time="2024-02-12T19:24:44.231836950Z" level=info msg="Stop container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" with signal terminated"
Feb 12 19:24:44.236861 systemd-networkd[1596]: lxc_health: Link DOWN
Feb 12 19:24:44.236869 systemd-networkd[1596]: lxc_health: Lost carrier
Feb 12 19:24:44.275179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d-rootfs.mount: Deactivated successfully.
Feb 12 19:24:44.892832 env[1425]: time="2024-02-12T19:24:44.892783469Z" level=info msg="shim disconnected" id=63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d
Feb 12 19:24:44.892832 env[1425]: time="2024-02-12T19:24:44.892831742Z" level=warning msg="cleaning up after shim disconnected" id=63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d namespace=k8s.io
Feb 12 19:24:44.893074 env[1425]: time="2024-02-12T19:24:44.892841621Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:44.899540 env[1425]: time="2024-02-12T19:24:44.899490434Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3538 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:44.903582 env[1425]: time="2024-02-12T19:24:44.903543897Z" level=info msg="StopContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" returns successfully"
Feb 12 19:24:44.904176 env[1425]: time="2024-02-12T19:24:44.904146531Z" level=info msg="StopPodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\""
Feb 12 19:24:44.904236 env[1425]: time="2024-02-12T19:24:44.904209042Z" level=info msg="Container to stop \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:44.904236 env[1425]: time="2024-02-12T19:24:44.904228239Z" level=info msg="Container to stop \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:44.904298 env[1425]: time="2024-02-12T19:24:44.904239718Z" level=info msg="Container to stop \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:44.904298 env[1425]: time="2024-02-12T19:24:44.904250876Z" level=info msg="Container to stop \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:44.904298 env[1425]: time="2024-02-12T19:24:44.904261115Z" level=info msg="Container to stop \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:44.905917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6-shm.mount: Deactivated successfully.
Feb 12 19:24:44.934636 env[1425]: time="2024-02-12T19:24:44.934580637Z" level=info msg="shim disconnected" id=6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6
Feb 12 19:24:44.934969 env[1425]: time="2024-02-12T19:24:44.934939986Z" level=warning msg="cleaning up after shim disconnected" id=6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6 namespace=k8s.io
Feb 12 19:24:44.935047 env[1425]: time="2024-02-12T19:24:44.935034252Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:44.941669 env[1425]: time="2024-02-12T19:24:44.941635592Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3571 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:44.942070 env[1425]: time="2024-02-12T19:24:44.942045454Z" level=info msg="TearDown network for sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" successfully"
Feb 12 19:24:44.942159 env[1425]: time="2024-02-12T19:24:44.942142720Z" level=info msg="StopPodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" returns successfully"
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011238 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-kernel\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011289 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011311 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cni-path\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011331 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-net\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011346 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-bpf-maps\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011440 kubelet[1983]: I0212 19:24:45.011364 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-hostproc\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011380 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-xtables-lock\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011397 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-run\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011427 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-cgroup\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011444 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-etc-cni-netd\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011462 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-lib-modules\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011715 kubelet[1983]: I0212 19:24:45.011482 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwlw\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-kube-api-access-zbwlw\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011866 kubelet[1983]: I0212 19:24:45.011503 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-hubble-tls\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011866 kubelet[1983]: I0212 19:24:45.011524 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ac773f8-326a-42ed-aef3-22c40d334eaf-clustermesh-secrets\") pod \"4ac773f8-326a-42ed-aef3-22c40d334eaf\" (UID: \"4ac773f8-326a-42ed-aef3-22c40d334eaf\") "
Feb 12 19:24:45.011948 kubelet[1983]: I0212 19:24:45.011921 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:45.012086 kubelet[1983]: W0212 19:24:45.012056 1983 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4ac773f8-326a-42ed-aef3-22c40d334eaf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:24:45.014297 kubelet[1983]: I0212 19:24:45.013747 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:45.014297 kubelet[1983]: I0212 19:24:45.013804 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cni-path" (OuterVolumeSpecName: "cni-path") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014297 kubelet[1983]: I0212 19:24:45.013821 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014297 kubelet[1983]: I0212 19:24:45.013838 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014297 kubelet[1983]: I0212 19:24:45.013852 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-hostproc" (OuterVolumeSpecName: "hostproc") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014511 kubelet[1983]: I0212 19:24:45.013866 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014511 kubelet[1983]: I0212 19:24:45.013881 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014511 kubelet[1983]: I0212 19:24:45.013895 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014511 kubelet[1983]: I0212 19:24:45.013908 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.014511 kubelet[1983]: I0212 19:24:45.013922 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:45.015407 kubelet[1983]: I0212 19:24:45.015374 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac773f8-326a-42ed-aef3-22c40d334eaf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:45.016377 kubelet[1983]: I0212 19:24:45.016349 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-kube-api-access-zbwlw" (OuterVolumeSpecName: "kube-api-access-zbwlw") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "kube-api-access-zbwlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:45.017450 kubelet[1983]: I0212 19:24:45.017429 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4ac773f8-326a-42ed-aef3-22c40d334eaf" (UID: "4ac773f8-326a-42ed-aef3-22c40d334eaf"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:45.100942 kubelet[1983]: E0212 19:24:45.100916 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:45.112225 kubelet[1983]: I0212 19:24:45.112195 1983 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-hubble-tls\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112231 1983 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ac773f8-326a-42ed-aef3-22c40d334eaf-clustermesh-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112245 1983 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-kernel\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112254 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-config-path\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112266 1983 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cni-path\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112276 1983 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-host-proc-sys-net\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112287 kubelet[1983]: I0212 19:24:45.112284 1983 reconciler_common.go:295] "Volume 
detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-bpf-maps\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112293 1983 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-hostproc\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112303 1983 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-xtables-lock\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112313 1983 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-zbwlw\" (UniqueName: \"kubernetes.io/projected/4ac773f8-326a-42ed-aef3-22c40d334eaf-kube-api-access-zbwlw\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112321 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-run\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112330 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-cilium-cgroup\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112340 1983 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-etc-cni-netd\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.112448 kubelet[1983]: I0212 19:24:45.112349 1983 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4ac773f8-326a-42ed-aef3-22c40d334eaf-lib-modules\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:45.208658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6-rootfs.mount: Deactivated successfully. Feb 12 19:24:45.208804 systemd[1]: var-lib-kubelet-pods-4ac773f8\x2d326a\x2d42ed\x2daef3\x2d22c40d334eaf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbwlw.mount: Deactivated successfully. Feb 12 19:24:45.208899 systemd[1]: var-lib-kubelet-pods-4ac773f8\x2d326a\x2d42ed\x2daef3\x2d22c40d334eaf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:45.208980 systemd[1]: var-lib-kubelet-pods-4ac773f8\x2d326a\x2d42ed\x2daef3\x2d22c40d334eaf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:45.569885 kubelet[1983]: I0212 19:24:45.569863 1983 scope.go:115] "RemoveContainer" containerID="63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d" Feb 12 19:24:45.571847 env[1425]: time="2024-02-12T19:24:45.571800744Z" level=info msg="RemoveContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\"" Feb 12 19:24:45.587340 env[1425]: time="2024-02-12T19:24:45.587296760Z" level=info msg="RemoveContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" returns successfully" Feb 12 19:24:45.587595 kubelet[1983]: I0212 19:24:45.587572 1983 scope.go:115] "RemoveContainer" containerID="ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d" Feb 12 19:24:45.590691 env[1425]: time="2024-02-12T19:24:45.590390884Z" level=info msg="RemoveContainer for \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\"" Feb 12 19:24:45.598193 env[1425]: time="2024-02-12T19:24:45.598093679Z" level=info msg="RemoveContainer for \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\" returns 
successfully" Feb 12 19:24:45.598315 kubelet[1983]: I0212 19:24:45.598287 1983 scope.go:115] "RemoveContainer" containerID="abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103" Feb 12 19:24:45.599264 env[1425]: time="2024-02-12T19:24:45.599233998Z" level=info msg="RemoveContainer for \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\"" Feb 12 19:24:45.607040 env[1425]: time="2024-02-12T19:24:45.607000184Z" level=info msg="RemoveContainer for \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\" returns successfully" Feb 12 19:24:45.607298 kubelet[1983]: I0212 19:24:45.607279 1983 scope.go:115] "RemoveContainer" containerID="b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417" Feb 12 19:24:45.608579 env[1425]: time="2024-02-12T19:24:45.608328317Z" level=info msg="RemoveContainer for \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\"" Feb 12 19:24:45.616803 env[1425]: time="2024-02-12T19:24:45.616772447Z" level=info msg="RemoveContainer for \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\" returns successfully" Feb 12 19:24:45.617148 kubelet[1983]: I0212 19:24:45.617111 1983 scope.go:115] "RemoveContainer" containerID="b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01" Feb 12 19:24:45.618105 env[1425]: time="2024-02-12T19:24:45.618080663Z" level=info msg="RemoveContainer for \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\"" Feb 12 19:24:45.626436 env[1425]: time="2024-02-12T19:24:45.626376094Z" level=info msg="RemoveContainer for \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\" returns successfully" Feb 12 19:24:45.626769 kubelet[1983]: I0212 19:24:45.626745 1983 scope.go:115] "RemoveContainer" containerID="63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d" Feb 12 19:24:45.627015 env[1425]: time="2024-02-12T19:24:45.626937175Z" level=error msg="ContainerStatus for 
\"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": not found" Feb 12 19:24:45.627163 kubelet[1983]: E0212 19:24:45.627143 1983 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": not found" containerID="63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d" Feb 12 19:24:45.627200 kubelet[1983]: I0212 19:24:45.627185 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d} err="failed to get container status \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": rpc error: code = NotFound desc = an error occurred when try to find container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": not found" Feb 12 19:24:45.627200 kubelet[1983]: I0212 19:24:45.627200 1983 scope.go:115] "RemoveContainer" containerID="ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d" Feb 12 19:24:45.627385 env[1425]: time="2024-02-12T19:24:45.627336878Z" level=error msg="ContainerStatus for \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\": not found" Feb 12 19:24:45.627518 kubelet[1983]: E0212 19:24:45.627495 1983 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\": not found" 
containerID="ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d" Feb 12 19:24:45.627560 kubelet[1983]: I0212 19:24:45.627533 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d} err="failed to get container status \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef12d49ae71bdda5cdf6d5c33c5eec05a36b3bc33c288812915c5f1cb4ff502d\": not found" Feb 12 19:24:45.627560 kubelet[1983]: I0212 19:24:45.627542 1983 scope.go:115] "RemoveContainer" containerID="abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103" Feb 12 19:24:45.627710 env[1425]: time="2024-02-12T19:24:45.627669871Z" level=error msg="ContainerStatus for \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\": not found" Feb 12 19:24:45.627818 kubelet[1983]: E0212 19:24:45.627799 1983 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\": not found" containerID="abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103" Feb 12 19:24:45.627856 kubelet[1983]: I0212 19:24:45.627834 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103} err="failed to get container status \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\": rpc error: code = NotFound desc = an error occurred when try to find container \"abde3eb23ed500bc0eb4c104854a5306fba0bc94fe7c2420f9cf29982ebbd103\": not found" Feb 12 19:24:45.627856 
kubelet[1983]: I0212 19:24:45.627843 1983 scope.go:115] "RemoveContainer" containerID="b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417" Feb 12 19:24:45.628009 env[1425]: time="2024-02-12T19:24:45.627966110Z" level=error msg="ContainerStatus for \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\": not found" Feb 12 19:24:45.628124 kubelet[1983]: E0212 19:24:45.628106 1983 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\": not found" containerID="b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417" Feb 12 19:24:45.628164 kubelet[1983]: I0212 19:24:45.628142 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417} err="failed to get container status \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\": rpc error: code = NotFound desc = an error occurred when try to find container \"b53bdeac185da2ceb98e7a9c2154468a6b4c56514d470d2aacd679e3ee12f417\": not found" Feb 12 19:24:45.628164 kubelet[1983]: I0212 19:24:45.628152 1983 scope.go:115] "RemoveContainer" containerID="b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01" Feb 12 19:24:45.628318 env[1425]: time="2024-02-12T19:24:45.628276026Z" level=error msg="ContainerStatus for \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\": not found" Feb 12 19:24:45.628454 kubelet[1983]: E0212 19:24:45.628425 1983 remote_runtime.go:415] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\": not found" containerID="b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01" Feb 12 19:24:45.628493 kubelet[1983]: I0212 19:24:45.628461 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01} err="failed to get container status \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\": rpc error: code = NotFound desc = an error occurred when try to find container \"b791ea2d2252f22efaf301c0c2091cbfce02751a1325271533b8cc569aa88c01\": not found" Feb 12 19:24:46.101658 kubelet[1983]: E0212 19:24:46.101627 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:46.450323 env[1425]: time="2024-02-12T19:24:46.450223213Z" level=info msg="StopContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" with timeout 1 (s)" Feb 12 19:24:46.450655 env[1425]: time="2024-02-12T19:24:46.450595322Z" level=error msg="StopContainer for \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": not found" Feb 12 19:24:46.451064 kubelet[1983]: E0212 19:24:46.451041 1983 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d\": not found" containerID="63eae295a6b2ff4a30c300b47ffb0d0d58573ae6dcbec67b4a3243599fc3c93d" Feb 12 19:24:46.451274 env[1425]: time="2024-02-12T19:24:46.451251510Z" level=info msg="StopPodSandbox for 
\"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\"" Feb 12 19:24:46.451456 env[1425]: time="2024-02-12T19:24:46.451390291Z" level=info msg="TearDown network for sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" successfully" Feb 12 19:24:46.451539 kubelet[1983]: I0212 19:24:46.451518 1983 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4ac773f8-326a-42ed-aef3-22c40d334eaf path="/var/lib/kubelet/pods/4ac773f8-326a-42ed-aef3-22c40d334eaf/volumes" Feb 12 19:24:46.451595 env[1425]: time="2024-02-12T19:24:46.451525312Z" level=info msg="StopPodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" returns successfully" Feb 12 19:24:47.102822 kubelet[1983]: E0212 19:24:47.102787 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:47.596153 kubelet[1983]: I0212 19:24:47.596120 1983 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:47.596316 kubelet[1983]: E0212 19:24:47.596167 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" containerName="mount-cgroup" Feb 12 19:24:47.596316 kubelet[1983]: E0212 19:24:47.596176 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" containerName="apply-sysctl-overwrites" Feb 12 19:24:47.596316 kubelet[1983]: E0212 19:24:47.596182 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" containerName="cilium-agent" Feb 12 19:24:47.596316 kubelet[1983]: E0212 19:24:47.596189 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" containerName="mount-bpf-fs" Feb 12 19:24:47.596316 kubelet[1983]: E0212 19:24:47.596195 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" 
containerName="clean-cilium-state" Feb 12 19:24:47.596316 kubelet[1983]: I0212 19:24:47.596212 1983 memory_manager.go:346] "RemoveStaleState removing state" podUID="4ac773f8-326a-42ed-aef3-22c40d334eaf" containerName="cilium-agent" Feb 12 19:24:47.600066 kubelet[1983]: W0212 19:24:47.600037 1983 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.20.24" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.24' and this object Feb 12 19:24:47.600066 kubelet[1983]: E0212 19:24:47.600069 1983 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.20.24" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.24' and this object Feb 12 19:24:47.622722 kubelet[1983]: I0212 19:24:47.622691 1983 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:47.622851 kubelet[1983]: I0212 19:24:47.622821 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1743907-b54b-4a67-a198-5ee4f79187a7-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-vvz62\" (UID: \"c1743907-b54b-4a67-a198-5ee4f79187a7\") " pod="kube-system/cilium-operator-f59cbd8c6-vvz62" Feb 12 19:24:47.622851 kubelet[1983]: I0212 19:24:47.622849 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpr9r\" (UniqueName: \"kubernetes.io/projected/c1743907-b54b-4a67-a198-5ee4f79187a7-kube-api-access-hpr9r\") pod \"cilium-operator-f59cbd8c6-vvz62\" (UID: \"c1743907-b54b-4a67-a198-5ee4f79187a7\") " pod="kube-system/cilium-operator-f59cbd8c6-vvz62" Feb 12 19:24:47.723776 
kubelet[1983]: I0212 19:24:47.723736 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-bpf-maps\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.723932 kubelet[1983]: I0212 19:24:47.723842 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-lib-modules\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.723932 kubelet[1983]: I0212 19:24:47.723867 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-clustermesh-secrets\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.723932 kubelet[1983]: I0212 19:24:47.723917 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-config-path\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724012 kubelet[1983]: I0212 19:24:47.723938 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-run\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724012 kubelet[1983]: I0212 19:24:47.723955 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hostproc\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724012 kubelet[1983]: I0212 19:24:47.724000 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-xtables-lock\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724077 kubelet[1983]: I0212 19:24:47.724070 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-kernel\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724124 kubelet[1983]: I0212 19:24:47.724103 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-cgroup\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724191 kubelet[1983]: I0212 19:24:47.724173 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-net\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724227 kubelet[1983]: I0212 19:24:47.724200 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hubble-tls\") pod 
\"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724227 kubelet[1983]: I0212 19:24:47.724223 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cni-path\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724288 kubelet[1983]: I0212 19:24:47.724242 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-etc-cni-netd\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724288 kubelet[1983]: I0212 19:24:47.724261 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-ipsec-secrets\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:47.724288 kubelet[1983]: I0212 19:24:47.724280 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbsz\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-kube-api-access-2rbsz\") pod \"cilium-n8kh8\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " pod="kube-system/cilium-n8kh8" Feb 12 19:24:48.103524 kubelet[1983]: E0212 19:24:48.103490 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:48.724196 kubelet[1983]: E0212 19:24:48.724164 1983 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 
19:24:48.724511 kubelet[1983]: E0212 19:24:48.724497 1983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1743907-b54b-4a67-a198-5ee4f79187a7-cilium-config-path podName:c1743907-b54b-4a67-a198-5ee4f79187a7 nodeName:}" failed. No retries permitted until 2024-02-12 19:24:49.224475913 +0000 UTC m=+76.036880255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c1743907-b54b-4a67-a198-5ee4f79187a7-cilium-config-path") pod "cilium-operator-f59cbd8c6-vvz62" (UID: "c1743907-b54b-4a67-a198-5ee4f79187a7") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:48.826927 env[1425]: time="2024-02-12T19:24:48.826873875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8kh8,Uid:9fb668a3-1984-41ff-bea5-3d1d334f60ac,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:48.860561 env[1425]: time="2024-02-12T19:24:48.860480401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:48.860561 env[1425]: time="2024-02-12T19:24:48.860528474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:48.860791 env[1425]: time="2024-02-12T19:24:48.860552111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:48.860867 env[1425]: time="2024-02-12T19:24:48.860807116Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79 pid=3600 runtime=io.containerd.runc.v2 Feb 12 19:24:48.896563 env[1425]: time="2024-02-12T19:24:48.896517994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8kh8,Uid:9fb668a3-1984-41ff-bea5-3d1d334f60ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\"" Feb 12 19:24:48.899201 env[1425]: time="2024-02-12T19:24:48.899156434Z" level=info msg="CreateContainer within sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:48.915889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916244844.mount: Deactivated successfully. 
Feb 12 19:24:48.929308 env[1425]: time="2024-02-12T19:24:48.929226683Z" level=info msg="CreateContainer within sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\"" Feb 12 19:24:48.929989 env[1425]: time="2024-02-12T19:24:48.929933386Z" level=info msg="StartContainer for \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\"" Feb 12 19:24:48.979614 env[1425]: time="2024-02-12T19:24:48.979493332Z" level=info msg="StartContainer for \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\" returns successfully" Feb 12 19:24:49.047795 env[1425]: time="2024-02-12T19:24:49.047746622Z" level=info msg="shim disconnected" id=126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721 Feb 12 19:24:49.048099 env[1425]: time="2024-02-12T19:24:49.048079457Z" level=warning msg="cleaning up after shim disconnected" id=126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721 namespace=k8s.io Feb 12 19:24:49.048193 env[1425]: time="2024-02-12T19:24:49.048177444Z" level=info msg="cleaning up dead shim" Feb 12 19:24:49.055366 env[1425]: time="2024-02-12T19:24:49.055324876Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3683 runtime=io.containerd.runc.v2\n" Feb 12 19:24:49.104073 kubelet[1983]: E0212 19:24:49.104026 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:49.135802 kubelet[1983]: E0212 19:24:49.135776 1983 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:49.399253 env[1425]: time="2024-02-12T19:24:49.399217878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vvz62,Uid:c1743907-b54b-4a67-a198-5ee4f79187a7,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:49.428450 env[1425]: time="2024-02-12T19:24:49.428355893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:49.428588 env[1425]: time="2024-02-12T19:24:49.428459399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:49.428588 env[1425]: time="2024-02-12T19:24:49.428485275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:49.428760 env[1425]: time="2024-02-12T19:24:49.428709965Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff55480495fec864c97264ea3119f698648e41cd88f169d1fda0cb7d771a89c pid=3704 runtime=io.containerd.runc.v2 Feb 12 19:24:49.465687 env[1425]: time="2024-02-12T19:24:49.465642245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vvz62,Uid:c1743907-b54b-4a67-a198-5ee4f79187a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff55480495fec864c97264ea3119f698648e41cd88f169d1fda0cb7d771a89c\"" Feb 12 19:24:49.467155 env[1425]: time="2024-02-12T19:24:49.467124564Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:24:49.579604 env[1425]: time="2024-02-12T19:24:49.579569301Z" level=info msg="StopPodSandbox for \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\"" Feb 12 19:24:49.579794 env[1425]: time="2024-02-12T19:24:49.579774633Z" level=info msg="Container to stop \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 
12 19:24:49.611896 env[1425]: time="2024-02-12T19:24:49.611844771Z" level=info msg="shim disconnected" id=2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79 Feb 12 19:24:49.611896 env[1425]: time="2024-02-12T19:24:49.611892165Z" level=warning msg="cleaning up after shim disconnected" id=2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79 namespace=k8s.io Feb 12 19:24:49.611896 env[1425]: time="2024-02-12T19:24:49.611901523Z" level=info msg="cleaning up dead shim" Feb 12 19:24:49.618763 env[1425]: time="2024-02-12T19:24:49.618727719Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3758 runtime=io.containerd.runc.v2\n" Feb 12 19:24:49.619192 env[1425]: time="2024-02-12T19:24:49.619165980Z" level=info msg="TearDown network for sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" successfully" Feb 12 19:24:49.619282 env[1425]: time="2024-02-12T19:24:49.619264487Z" level=info msg="StopPodSandbox for \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" returns successfully" Feb 12 19:24:49.737601 kubelet[1983]: I0212 19:24:49.737445 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hubble-tls\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.737601 kubelet[1983]: I0212 19:24:49.737506 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-lib-modules\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.737601 kubelet[1983]: I0212 19:24:49.737532 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-clustermesh-secrets\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.737601 kubelet[1983]: I0212 19:24:49.737556 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rbsz\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-kube-api-access-2rbsz\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.737601 kubelet[1983]: I0212 19:24:49.737578 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-net\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 19:24:49.737936 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-etc-cni-netd\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 19:24:49.737977 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-bpf-maps\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 19:24:49.738002 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-config-path\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 
19:24:49.738021 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-run\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 19:24:49.738040 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hostproc\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738070 kubelet[1983]: I0212 19:24:49.738056 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cni-path\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738220 kubelet[1983]: I0212 19:24:49.738074 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-cgroup\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738220 kubelet[1983]: I0212 19:24:49.738091 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-xtables-lock\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738220 kubelet[1983]: I0212 19:24:49.738108 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-kernel\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: 
\"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.738220 kubelet[1983]: I0212 19:24:49.738126 1983 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-ipsec-secrets\") pod \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\" (UID: \"9fb668a3-1984-41ff-bea5-3d1d334f60ac\") " Feb 12 19:24:49.739183 kubelet[1983]: W0212 19:24:49.738463 1983 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9fb668a3-1984-41ff-bea5-3d1d334f60ac/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:49.740620 kubelet[1983]: I0212 19:24:49.740594 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:49.740730 kubelet[1983]: I0212 19:24:49.740626 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.740801 kubelet[1983]: I0212 19:24:49.740644 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.740868 kubelet[1983]: I0212 19:24:49.740655 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.740927 kubelet[1983]: I0212 19:24:49.740665 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.740979 kubelet[1983]: I0212 19:24:49.740675 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.741046 kubelet[1983]: I0212 19:24:49.740686 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.741152 kubelet[1983]: I0212 19:24:49.741133 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.741231 kubelet[1983]: I0212 19:24:49.741193 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:49.741307 kubelet[1983]: I0212 19:24:49.741213 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.741393 kubelet[1983]: I0212 19:24:49.741380 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.741508 kubelet[1983]: I0212 19:24:49.741495 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:49.743141 kubelet[1983]: I0212 19:24:49.743114 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:49.744569 kubelet[1983]: I0212 19:24:49.744539 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:49.745045 kubelet[1983]: I0212 19:24:49.745017 1983 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-kube-api-access-2rbsz" (OuterVolumeSpecName: "kube-api-access-2rbsz") pod "9fb668a3-1984-41ff-bea5-3d1d334f60ac" (UID: "9fb668a3-1984-41ff-bea5-3d1d334f60ac"). InnerVolumeSpecName "kube-api-access-2rbsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:49.838340 kubelet[1983]: I0212 19:24:49.838294 1983 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-lib-modules\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838340 kubelet[1983]: I0212 19:24:49.838346 1983 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-clustermesh-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838368 1983 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2rbsz\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-kube-api-access-2rbsz\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838387 1983 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-net\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838405 1983 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hostproc\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838437 1983 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cni-path\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838454 1983 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-etc-cni-netd\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 
kubelet[1983]: I0212 19:24:49.838466 1983 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-bpf-maps\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838477 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-config-path\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838538 kubelet[1983]: I0212 19:24:49.838485 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-run\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838726 kubelet[1983]: I0212 19:24:49.838494 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-cgroup\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838726 kubelet[1983]: I0212 19:24:49.838502 1983 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-xtables-lock\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838726 kubelet[1983]: I0212 19:24:49.838511 1983 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb668a3-1984-41ff-bea5-3d1d334f60ac-host-proc-sys-kernel\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838726 kubelet[1983]: I0212 19:24:49.838520 1983 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fb668a3-1984-41ff-bea5-3d1d334f60ac-cilium-ipsec-secrets\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.838726 kubelet[1983]: I0212 19:24:49.838529 1983 reconciler_common.go:295] "Volume 
detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb668a3-1984-41ff-bea5-3d1d334f60ac-hubble-tls\") on node \"10.200.20.24\" DevicePath \"\"" Feb 12 19:24:49.852016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79-rootfs.mount: Deactivated successfully. Feb 12 19:24:49.852147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79-shm.mount: Deactivated successfully. Feb 12 19:24:49.852226 systemd[1]: var-lib-kubelet-pods-9fb668a3\x2d1984\x2d41ff\x2dbea5\x2d3d1d334f60ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rbsz.mount: Deactivated successfully. Feb 12 19:24:49.852307 systemd[1]: var-lib-kubelet-pods-9fb668a3\x2d1984\x2d41ff\x2dbea5\x2d3d1d334f60ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:49.852386 systemd[1]: var-lib-kubelet-pods-9fb668a3\x2d1984\x2d41ff\x2dbea5\x2d3d1d334f60ac-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:49.852540 systemd[1]: var-lib-kubelet-pods-9fb668a3\x2d1984\x2d41ff\x2dbea5\x2d3d1d334f60ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 19:24:50.105122 kubelet[1983]: E0212 19:24:50.105085 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:50.582362 kubelet[1983]: I0212 19:24:50.581487 1983 scope.go:115] "RemoveContainer" containerID="126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721" Feb 12 19:24:50.583125 env[1425]: time="2024-02-12T19:24:50.583087101Z" level=info msg="RemoveContainer for \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\"" Feb 12 19:24:50.589591 env[1425]: time="2024-02-12T19:24:50.589539556Z" level=info msg="RemoveContainer for \"126718e4108d83a9187c19afde81d8b4b4a520ba9bae6d036b39d4144a12d721\" returns successfully" Feb 12 19:24:50.610914 kubelet[1983]: I0212 19:24:50.610871 1983 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:50.611051 kubelet[1983]: E0212 19:24:50.610956 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb668a3-1984-41ff-bea5-3d1d334f60ac" containerName="mount-cgroup" Feb 12 19:24:50.611051 kubelet[1983]: I0212 19:24:50.610992 1983 memory_manager.go:346] "RemoveStaleState removing state" podUID="9fb668a3-1984-41ff-bea5-3d1d334f60ac" containerName="mount-cgroup" Feb 12 19:24:50.643334 kubelet[1983]: I0212 19:24:50.643272 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-bpf-maps\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643334 kubelet[1983]: I0212 19:24:50.643337 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-etc-cni-netd\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 
19:24:50.643547 kubelet[1983]: I0212 19:24:50.643361 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8751b0a4-3add-4590-bc93-e8a312236bfe-clustermesh-secrets\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643547 kubelet[1983]: I0212 19:24:50.643382 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-host-proc-sys-kernel\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643547 kubelet[1983]: I0212 19:24:50.643426 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-host-proc-sys-net\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643547 kubelet[1983]: I0212 19:24:50.643447 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgk4p\" (UniqueName: \"kubernetes.io/projected/8751b0a4-3add-4590-bc93-e8a312236bfe-kube-api-access-xgk4p\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643547 kubelet[1983]: I0212 19:24:50.643467 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-lib-modules\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643497 1983 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8751b0a4-3add-4590-bc93-e8a312236bfe-cilium-config-path\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643517 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-hostproc\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643537 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-xtables-lock\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643555 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8751b0a4-3add-4590-bc93-e8a312236bfe-hubble-tls\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643584 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-cilium-run\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643667 kubelet[1983]: I0212 19:24:50.643604 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-cilium-cgroup\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643803 kubelet[1983]: I0212 19:24:50.643624 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8751b0a4-3add-4590-bc93-e8a312236bfe-cni-path\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.643803 kubelet[1983]: I0212 19:24:50.643657 1983 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8751b0a4-3add-4590-bc93-e8a312236bfe-cilium-ipsec-secrets\") pod \"cilium-468h7\" (UID: \"8751b0a4-3add-4590-bc93-e8a312236bfe\") " pod="kube-system/cilium-468h7" Feb 12 19:24:50.855358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091258867.mount: Deactivated successfully. Feb 12 19:24:50.915102 env[1425]: time="2024-02-12T19:24:50.915049021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-468h7,Uid:8751b0a4-3add-4590-bc93-e8a312236bfe,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:50.966764 env[1425]: time="2024-02-12T19:24:50.966561073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:50.966764 env[1425]: time="2024-02-12T19:24:50.966598828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:50.966764 env[1425]: time="2024-02-12T19:24:50.966613706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:50.967038 env[1425]: time="2024-02-12T19:24:50.966988055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20 pid=3785 runtime=io.containerd.runc.v2 Feb 12 19:24:51.013678 env[1425]: time="2024-02-12T19:24:51.013637216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-468h7,Uid:8751b0a4-3add-4590-bc93-e8a312236bfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\"" Feb 12 19:24:51.016120 env[1425]: time="2024-02-12T19:24:51.016089290Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:51.047481 env[1425]: time="2024-02-12T19:24:51.047436644Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3cbef207b55e4d18ee6cb664b10acf0f6ba73d83dacda9d73eab56e6516b9d8d\"" Feb 12 19:24:51.048278 env[1425]: time="2024-02-12T19:24:51.048254576Z" level=info msg="StartContainer for \"3cbef207b55e4d18ee6cb664b10acf0f6ba73d83dacda9d73eab56e6516b9d8d\"" Feb 12 19:24:51.107645 kubelet[1983]: E0212 19:24:51.106495 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:51.110502 env[1425]: time="2024-02-12T19:24:51.110457390Z" level=info msg="StartContainer for \"3cbef207b55e4d18ee6cb664b10acf0f6ba73d83dacda9d73eab56e6516b9d8d\" returns successfully" Feb 12 19:24:51.168999 env[1425]: time="2024-02-12T19:24:51.168948378Z" level=info msg="shim disconnected" id=3cbef207b55e4d18ee6cb664b10acf0f6ba73d83dacda9d73eab56e6516b9d8d Feb 12 19:24:51.168999 
env[1425]: time="2024-02-12T19:24:51.168992932Z" level=warning msg="cleaning up after shim disconnected" id=3cbef207b55e4d18ee6cb664b10acf0f6ba73d83dacda9d73eab56e6516b9d8d namespace=k8s.io Feb 12 19:24:51.168999 env[1425]: time="2024-02-12T19:24:51.169001651Z" level=info msg="cleaning up dead shim" Feb 12 19:24:51.175664 env[1425]: time="2024-02-12T19:24:51.175614613Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3870 runtime=io.containerd.runc.v2\n" Feb 12 19:24:51.516720 env[1425]: time="2024-02-12T19:24:51.516616821Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:51.521796 env[1425]: time="2024-02-12T19:24:51.521750299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:51.530406 env[1425]: time="2024-02-12T19:24:51.530356076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:51.531103 env[1425]: time="2024-02-12T19:24:51.531072741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:24:51.533396 env[1425]: time="2024-02-12T19:24:51.533368235Z" level=info msg="CreateContainer within sandbox \"dff55480495fec864c97264ea3119f698648e41cd88f169d1fda0cb7d771a89c\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:24:51.556573 env[1425]: time="2024-02-12T19:24:51.556513280Z" level=info msg="CreateContainer within sandbox \"dff55480495fec864c97264ea3119f698648e41cd88f169d1fda0cb7d771a89c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20a629bed9feef45267af21914f9ed903044f6a3ffb9d7164728d319defea655\"" Feb 12 19:24:51.556988 env[1425]: time="2024-02-12T19:24:51.556953462Z" level=info msg="StartContainer for \"20a629bed9feef45267af21914f9ed903044f6a3ffb9d7164728d319defea655\"" Feb 12 19:24:51.589214 env[1425]: time="2024-02-12T19:24:51.589167341Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:24:51.617032 env[1425]: time="2024-02-12T19:24:51.616986525Z" level=info msg="StartContainer for \"20a629bed9feef45267af21914f9ed903044f6a3ffb9d7164728d319defea655\" returns successfully" Feb 12 19:24:51.624747 env[1425]: time="2024-02-12T19:24:51.624697460Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de99c74d9804f16300598d7a2aa566eb3fefd5bef1106f3ca84547bbf00a98c4\"" Feb 12 19:24:51.625384 env[1425]: time="2024-02-12T19:24:51.625348613Z" level=info msg="StartContainer for \"de99c74d9804f16300598d7a2aa566eb3fefd5bef1106f3ca84547bbf00a98c4\"" Feb 12 19:24:51.675340 env[1425]: time="2024-02-12T19:24:51.675292217Z" level=info msg="StartContainer for \"de99c74d9804f16300598d7a2aa566eb3fefd5bef1106f3ca84547bbf00a98c4\" returns successfully" Feb 12 19:24:51.909307 env[1425]: time="2024-02-12T19:24:51.909263128Z" level=info msg="shim disconnected" id=de99c74d9804f16300598d7a2aa566eb3fefd5bef1106f3ca84547bbf00a98c4 Feb 12 19:24:51.909641 env[1425]: time="2024-02-12T19:24:51.909620400Z" 
level=warning msg="cleaning up after shim disconnected" id=de99c74d9804f16300598d7a2aa566eb3fefd5bef1106f3ca84547bbf00a98c4 namespace=k8s.io Feb 12 19:24:51.909724 env[1425]: time="2024-02-12T19:24:51.909710708Z" level=info msg="cleaning up dead shim" Feb 12 19:24:51.916856 env[1425]: time="2024-02-12T19:24:51.916821763Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3970 runtime=io.containerd.runc.v2\n" Feb 12 19:24:52.106934 kubelet[1983]: E0212 19:24:52.106895 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:52.451338 kubelet[1983]: I0212 19:24:52.451111 1983 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9fb668a3-1984-41ff-bea5-3d1d334f60ac path="/var/lib/kubelet/pods/9fb668a3-1984-41ff-bea5-3d1d334f60ac/volumes" Feb 12 19:24:52.593519 env[1425]: time="2024-02-12T19:24:52.593475519Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:24:52.598795 kubelet[1983]: I0212 19:24:52.598765 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-vvz62" podStartSLOduration=-9.22337203125605e+09 pod.CreationTimestamp="2024-02-12 19:24:47 +0000 UTC" firstStartedPulling="2024-02-12 19:24:49.466724938 +0000 UTC m=+76.279129320" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:52.598456783 +0000 UTC m=+79.410861205" watchObservedRunningTime="2024-02-12 19:24:52.598724907 +0000 UTC m=+79.411129289" Feb 12 19:24:52.621643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503766096.mount: Deactivated successfully. 
Feb 12 19:24:52.636078 env[1425]: time="2024-02-12T19:24:52.636009198Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"367ab52dadfe308edc7c5257b9c369a2539e27e38e43bb621bbc0216e00c4340\"" Feb 12 19:24:52.637012 env[1425]: time="2024-02-12T19:24:52.636981430Z" level=info msg="StartContainer for \"367ab52dadfe308edc7c5257b9c369a2539e27e38e43bb621bbc0216e00c4340\"" Feb 12 19:24:52.687009 env[1425]: time="2024-02-12T19:24:52.686970207Z" level=info msg="StartContainer for \"367ab52dadfe308edc7c5257b9c369a2539e27e38e43bb621bbc0216e00c4340\" returns successfully" Feb 12 19:24:52.715700 env[1425]: time="2024-02-12T19:24:52.715593238Z" level=info msg="shim disconnected" id=367ab52dadfe308edc7c5257b9c369a2539e27e38e43bb621bbc0216e00c4340 Feb 12 19:24:52.716082 env[1425]: time="2024-02-12T19:24:52.716059137Z" level=warning msg="cleaning up after shim disconnected" id=367ab52dadfe308edc7c5257b9c369a2539e27e38e43bb621bbc0216e00c4340 namespace=k8s.io Feb 12 19:24:52.716158 env[1425]: time="2024-02-12T19:24:52.716144885Z" level=info msg="cleaning up dead shim" Feb 12 19:24:52.722761 env[1425]: time="2024-02-12T19:24:52.722728579Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4028 runtime=io.containerd.runc.v2\n" Feb 12 19:24:53.107652 kubelet[1983]: E0212 19:24:53.107618 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:53.597500 env[1425]: time="2024-02-12T19:24:53.597456547Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:24:53.619256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187614610.mount: Deactivated 
successfully. Feb 12 19:24:53.625025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337717328.mount: Deactivated successfully. Feb 12 19:24:53.637438 env[1425]: time="2024-02-12T19:24:53.637380056Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24222cdc6b05e07612c389e853974481ad662b50e890439a9a0f504df78b2b13\"" Feb 12 19:24:53.638026 env[1425]: time="2024-02-12T19:24:53.638002775Z" level=info msg="StartContainer for \"24222cdc6b05e07612c389e853974481ad662b50e890439a9a0f504df78b2b13\"" Feb 12 19:24:53.684406 env[1425]: time="2024-02-12T19:24:53.684356125Z" level=info msg="StartContainer for \"24222cdc6b05e07612c389e853974481ad662b50e890439a9a0f504df78b2b13\" returns successfully" Feb 12 19:24:53.708192 env[1425]: time="2024-02-12T19:24:53.708144700Z" level=info msg="shim disconnected" id=24222cdc6b05e07612c389e853974481ad662b50e890439a9a0f504df78b2b13 Feb 12 19:24:53.708434 env[1425]: time="2024-02-12T19:24:53.708396667Z" level=warning msg="cleaning up after shim disconnected" id=24222cdc6b05e07612c389e853974481ad662b50e890439a9a0f504df78b2b13 namespace=k8s.io Feb 12 19:24:53.708510 env[1425]: time="2024-02-12T19:24:53.708495574Z" level=info msg="cleaning up dead shim" Feb 12 19:24:53.715613 env[1425]: time="2024-02-12T19:24:53.715567211Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4083 runtime=io.containerd.runc.v2\n" Feb 12 19:24:54.050120 kubelet[1983]: E0212 19:24:54.050084 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:54.108369 kubelet[1983]: E0212 19:24:54.108341 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:54.137338 kubelet[1983]: E0212 
19:24:54.137306 1983 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:54.600940 env[1425]: time="2024-02-12T19:24:54.600906288Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:24:54.624847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361084077.mount: Deactivated successfully. Feb 12 19:24:54.630473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210208972.mount: Deactivated successfully. Feb 12 19:24:54.640628 env[1425]: time="2024-02-12T19:24:54.640587873Z" level=info msg="CreateContainer within sandbox \"5f68d7f7e23ddab300e3762f4f4aba9cafc7b31550bbd6db184da02936943c20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b\"" Feb 12 19:24:54.641193 env[1425]: time="2024-02-12T19:24:54.641170558Z" level=info msg="StartContainer for \"dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b\"" Feb 12 19:24:54.696160 env[1425]: time="2024-02-12T19:24:54.695666186Z" level=info msg="StartContainer for \"dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b\" returns successfully" Feb 12 19:24:54.999533 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:24:55.109212 kubelet[1983]: E0212 19:24:55.109171 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:55.619804 kubelet[1983]: I0212 19:24:55.619762 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-468h7" podStartSLOduration=5.619729126 pod.CreationTimestamp="2024-02-12 19:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:55.619637618 +0000 UTC m=+82.432042000" watchObservedRunningTime="2024-02-12 19:24:55.619729126 +0000 UTC m=+82.432133508" Feb 12 19:24:56.109373 kubelet[1983]: E0212 19:24:56.109333 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:57.110372 kubelet[1983]: E0212 19:24:57.110343 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:57.440510 systemd-networkd[1596]: lxc_health: Link UP Feb 12 19:24:57.459561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:24:57.459338 systemd-networkd[1596]: lxc_health: Gained carrier Feb 12 19:24:57.620111 systemd[1]: run-containerd-runc-k8s.io-dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b-runc.FdQbEu.mount: Deactivated successfully. Feb 12 19:24:58.111359 kubelet[1983]: E0212 19:24:58.111313 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:58.371590 kubelet[1983]: I0212 19:24:58.371499 1983 setters.go:548] "Node became not ready" node="10.200.20.24" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:24:58.371437028 +0000 UTC m=+85.183841410 LastTransitionTime:2024-02-12 19:24:58.371437028 +0000 UTC m=+85.183841410 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:24:59.112216 kubelet[1983]: E0212 19:24:59.112168 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:59.376567 systemd-networkd[1596]: lxc_health: Gained IPv6LL Feb 12 19:24:59.802640 systemd[1]: 
run-containerd-runc-k8s.io-dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b-runc.iO2sqQ.mount: Deactivated successfully. Feb 12 19:25:00.113178 kubelet[1983]: E0212 19:25:00.113062 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:01.113191 kubelet[1983]: E0212 19:25:01.113156 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:01.930458 systemd[1]: run-containerd-runc-k8s.io-dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b-runc.94x131.mount: Deactivated successfully. Feb 12 19:25:02.114137 kubelet[1983]: E0212 19:25:02.114078 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:03.114379 kubelet[1983]: E0212 19:25:03.114330 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:04.059085 systemd[1]: run-containerd-runc-k8s.io-dc7d52389d98bea3bdaf28970f515c4d8312ca6a06c37fdfd9733afa8a133b8b-runc.PA7Hxy.mount: Deactivated successfully. 
Feb 12 19:25:04.115115 kubelet[1983]: E0212 19:25:04.115074 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:05.115465 kubelet[1983]: E0212 19:25:05.115407 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:06.115607 kubelet[1983]: E0212 19:25:06.115553 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:07.116812 kubelet[1983]: E0212 19:25:07.116600 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:08.117235 kubelet[1983]: E0212 19:25:08.117201 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:09.117369 kubelet[1983]: E0212 19:25:09.117328 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:10.118107 kubelet[1983]: E0212 19:25:10.118072 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:11.118448 kubelet[1983]: E0212 19:25:11.118405 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:12.119296 kubelet[1983]: E0212 19:25:12.119258 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:13.120081 kubelet[1983]: E0212 19:25:13.120044 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:14.050853 kubelet[1983]: E0212 19:25:14.050807 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
19:25:14.120211 kubelet[1983]: E0212 19:25:14.120162 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:15.121185 kubelet[1983]: E0212 19:25:15.121154 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:16.122700 kubelet[1983]: E0212 19:25:16.122663 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:17.122999 kubelet[1983]: E0212 19:25:17.122963 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:18.123242 kubelet[1983]: E0212 19:25:18.123203 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:18.220350 kubelet[1983]: E0212 19:25:18.220321 1983 controller.go:189] failed to update lease, error: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 10.200.20.24) Feb 12 19:25:18.678791 kubelet[1983]: E0212 19:25:18.678764 1983 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.34:38622->10.200.20.27:2379: read: connection timed out Feb 12 19:25:18.775947 kubelet[1983]: E0212 19:25:18.775890 1983 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:25:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:25:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:25:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T19:25:08Z\\\",\\\"lastTransitionTime\\\":\\\"2024-02-12T19:25:08Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22\\\",\\\"registry.k8s.io/kube-proxy:v1.26.13\\\"],\\\"sizeBytes\\\":21139040},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"10.200.20.24\": the server was unable to return a response in the time allotted, 
but may still be processing the request (patch nodes 10.200.20.24)" Feb 12 19:25:19.123869 kubelet[1983]: E0212 19:25:19.123838 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:20.124779 kubelet[1983]: E0212 19:25:20.124740 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:21.125518 kubelet[1983]: E0212 19:25:21.125473 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:22.126454 kubelet[1983]: E0212 19:25:22.126404 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:23.126962 kubelet[1983]: E0212 19:25:23.126920 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:24.127942 kubelet[1983]: E0212 19:25:24.127892 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:25.128738 kubelet[1983]: E0212 19:25:25.128707 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:26.129682 kubelet[1983]: E0212 19:25:26.129645 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:27.130638 kubelet[1983]: E0212 19:25:27.130602 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:28.131159 kubelet[1983]: E0212 19:25:28.131111 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:28.679819 kubelet[1983]: E0212 19:25:28.679781 1983 controller.go:189] failed to update lease, error: Put 
"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.24?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 19:25:28.776541 kubelet[1983]: E0212 19:25:28.776506 1983 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.24\": Get \"https://10.200.20.34:6443/api/v1/nodes/10.200.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:29.132099 kubelet[1983]: E0212 19:25:29.132061 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:30.132644 kubelet[1983]: E0212 19:25:30.132607 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:31.133606 kubelet[1983]: E0212 19:25:31.133567 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:32.134118 kubelet[1983]: E0212 19:25:32.134086 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:33.134832 kubelet[1983]: E0212 19:25:33.134798 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:34.050866 kubelet[1983]: E0212 19:25:34.050830 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:34.069350 env[1425]: time="2024-02-12T19:25:34.069306035Z" level=info msg="StopPodSandbox for \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\"" Feb 12 19:25:34.069767 env[1425]: time="2024-02-12T19:25:34.069400849Z" level=info msg="TearDown network for sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" successfully" Feb 12 19:25:34.069767 
env[1425]: time="2024-02-12T19:25:34.069447416Z" level=info msg="StopPodSandbox for \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" returns successfully" Feb 12 19:25:34.070083 env[1425]: time="2024-02-12T19:25:34.070051148Z" level=info msg="RemovePodSandbox for \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\"" Feb 12 19:25:34.070197 env[1425]: time="2024-02-12T19:25:34.070164365Z" level=info msg="Forcibly stopping sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\"" Feb 12 19:25:34.070318 env[1425]: time="2024-02-12T19:25:34.070300946Z" level=info msg="TearDown network for sandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" successfully" Feb 12 19:25:34.083521 env[1425]: time="2024-02-12T19:25:34.083488947Z" level=info msg="RemovePodSandbox \"2ab4f36885204d288bd1303b9936129259bc03c46b717048e1eb4abba80bdc79\" returns successfully" Feb 12 19:25:34.084058 env[1425]: time="2024-02-12T19:25:34.084022588Z" level=info msg="StopPodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\"" Feb 12 19:25:34.084136 env[1425]: time="2024-02-12T19:25:34.084097120Z" level=info msg="TearDown network for sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" successfully" Feb 12 19:25:34.084172 env[1425]: time="2024-02-12T19:25:34.084132965Z" level=info msg="StopPodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" returns successfully" Feb 12 19:25:34.084432 env[1425]: time="2024-02-12T19:25:34.084389764Z" level=info msg="RemovePodSandbox for \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\"" Feb 12 19:25:34.084476 env[1425]: time="2024-02-12T19:25:34.084443732Z" level=info msg="Forcibly stopping sandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\"" Feb 12 19:25:34.084530 env[1425]: time="2024-02-12T19:25:34.084507902Z" level=info msg="TearDown network for sandbox 
\"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" successfully" Feb 12 19:25:34.092238 env[1425]: time="2024-02-12T19:25:34.092196909Z" level=info msg="RemovePodSandbox \"6118ca74b30da1e39a574cc7a50bab31b8e0787b8681193065301c8c37784ea6\" returns successfully" Feb 12 19:25:34.135569 kubelet[1983]: E0212 19:25:34.135541 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:35.136765 kubelet[1983]: E0212 19:25:35.136724 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:36.137206 kubelet[1983]: E0212 19:25:36.137171 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:37.138432 kubelet[1983]: E0212 19:25:37.138365 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:38.138848 kubelet[1983]: E0212 19:25:38.138810 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:38.680688 kubelet[1983]: E0212 19:25:38.680653 1983 controller.go:189] failed to update lease, error: Put "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.24?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 19:25:38.777423 kubelet[1983]: E0212 19:25:38.777380 1983 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.24\": Get \"https://10.200.20.34:6443/api/v1/nodes/10.200.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:39.139872 kubelet[1983]: E0212 19:25:39.139823 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:25:40.140646 kubelet[1983]: E0212 19:25:40.140611 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:41.141849 kubelet[1983]: E0212 19:25:41.141812 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:42.142562 kubelet[1983]: E0212 19:25:42.142526 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:43.143660 kubelet[1983]: E0212 19:25:43.143625 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:44.144499 kubelet[1983]: E0212 19:25:44.144463 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:45.145209 kubelet[1983]: E0212 19:25:45.145181 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:46.146310 kubelet[1983]: E0212 19:25:46.146270 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:47.147441 kubelet[1983]: E0212 19:25:47.147399 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:48.147789 kubelet[1983]: E0212 19:25:48.147746 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:48.681830 kubelet[1983]: E0212 19:25:48.681786 1983 controller.go:189] failed to update lease, error: Put "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.24?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 19:25:48.681830 
kubelet[1983]: I0212 19:25:48.681826 1983 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 12 19:25:48.778574 kubelet[1983]: E0212 19:25:48.778546 1983 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.24\": Get \"https://10.200.20.34:6443/api/v1/nodes/10.200.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:49.148153 kubelet[1983]: E0212 19:25:49.148106 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:50.148765 kubelet[1983]: E0212 19:25:50.148719 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:51.149470 kubelet[1983]: E0212 19:25:51.149427 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:52.149784 kubelet[1983]: E0212 19:25:52.149737 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:53.150128 kubelet[1983]: E0212 19:25:53.150092 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:54.050295 kubelet[1983]: E0212 19:25:54.050266 1983 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:54.151184 kubelet[1983]: E0212 19:25:54.151161 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:55.152314 kubelet[1983]: E0212 19:25:55.152280 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:56.153247 kubelet[1983]: E0212 19:25:56.153210 
1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:57.153753 kubelet[1983]: E0212 19:25:57.153715 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:58.154405 kubelet[1983]: E0212 19:25:58.154378 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:58.682557 kubelet[1983]: E0212 19:25:58.682520 1983 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.24?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 19:25:58.779305 kubelet[1983]: E0212 19:25:58.779279 1983 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.24\": Get \"https://10.200.20.34:6443/api/v1/nodes/10.200.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 19:25:58.779510 kubelet[1983]: E0212 19:25:58.779498 1983 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count" Feb 12 19:25:59.155376 kubelet[1983]: E0212 19:25:59.155340 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:00.156197 kubelet[1983]: E0212 19:26:00.156159 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:01.156707 kubelet[1983]: E0212 19:26:01.156672 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:02.157950 kubelet[1983]: E0212 19:26:02.157906 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:03.158267 kubelet[1983]: E0212 19:26:03.158209 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:04.158774 kubelet[1983]: E0212 19:26:04.158746 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:05.160006 kubelet[1983]: E0212 19:26:05.159975 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:06.160923 kubelet[1983]: E0212 19:26:06.160882 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:07.161938 kubelet[1983]: E0212 19:26:07.161900 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:08.162719 kubelet[1983]: E0212 19:26:08.162688 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:08.884831 kubelet[1983]: E0212 19:26:08.884785 1983 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.24?timeout=10s": context deadline exceeded Feb 12 19:26:09.164228 kubelet[1983]: E0212 19:26:09.164037 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:10.165565 kubelet[1983]: E0212 19:26:10.165522 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:10.820652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.837018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.853607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.870320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.886192 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.902048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.902205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.920526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.920670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.938808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.939048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:10.957998 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.002455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.002662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.002780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.002883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.002987 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.012777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.022232 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.065602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.065751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.065860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.065963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.066071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.077918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.078119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.095845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.096114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.113861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.130618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.130723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 12 19:26:11.141461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.141700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.159505 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.159751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.165675 kubelet[1983]: E0212 19:26:11.165609 1983 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:11.178248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.194590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.194723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.205088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.205303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.223498 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287630 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287753 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.287949 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.288045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.304775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.304986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.322999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.356489 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.356687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.356811 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.356915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.368484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.368786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.387380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 12 19:26:11.416388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.416540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.416642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.416743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.433981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.443073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.443294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.461150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.482626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.482748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.482848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.498474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.524958 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.549568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.549818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.549938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.550049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.550140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.561237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.588234 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.613531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.613694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.613813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.613913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.614008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.623670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.623988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.642275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.642627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.668945 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.669196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.669304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.686813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.696302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.696540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.714544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.714782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.732230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.750687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.786215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.786376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.801446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.801668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.801784 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 12 19:26:11.801896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.801991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.822315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.823165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.823321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.849709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.886529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.886647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.886745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.886856 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.886967 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.898776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.899006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.916583 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.916835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.935609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.935858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.953926 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.954179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:11.971814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.000116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.000251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.000351 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.000469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.018392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.018635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.036624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.054968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.070654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.070769 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.070869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.081460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.081662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.099377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.117544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.130623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.130755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.130855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.144702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.144960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.162971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.177675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.177800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.178293 kubelet[1983]: E0212 19:26:12.166061 1983 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:26:12.190330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.190587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.208732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.208954 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.226921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.254555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.254721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.254816 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.268175 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.268402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.286295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.319662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.319782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.319882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:26:12.319979 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.331825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.332070 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.358990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.359238 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.359344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.377177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.377427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.395040 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.395293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.412747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.422258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.466341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.466599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.466712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.466812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.466908 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.467004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.484148 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.484407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.503208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.503448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.521302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.538820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.538946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.539051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.556011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.556288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.574160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.606454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.606679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.606786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.606885 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.619303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.619589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.637275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.671435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.671674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.671796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.671907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.681568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.681859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.698923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.699155 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.716720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.716939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.735013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.735227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.753449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.753728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.780253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.780534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.780642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.798018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.798235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.807459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.825077 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.851842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.861145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.861252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.861349 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.861467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.879213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.879447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.897353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.897624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.915589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.984558 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.984729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.984844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.984944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.985041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.985140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.985237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.985348 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.998090 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:12.998315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:13.016177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:13.034183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:13.052776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:13.052899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:26:13.053002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001